Environmental weatherman

Over the past half century, the environmental jeremiad has emerged as a minor publishing genre. Such works typically provide a grim assessment of the environmental situation before warning us that the time has come: Act now, and doom may be averted; dither, and all may be lost. Prescriptions for salvation follow, and in the end we glimpse a possible future that is not only environmentally sustainable but also just and personally fulfilling.

The formulaic aspects of such works often make for tiresome reading, but the genre as a whole is nonetheless valuable. Just because a prophecy is repeatedly made and continually ignored does not mean that it has no truth. Key global environmental indicators do indeed continue to decline, just as new threats constantly emerge. Meanwhile, new generations of young adults keep rising, ready for the message. The eco-jeremiad, in other words, serves an important purpose.

Not all would-be prophets, however, are equal to the task. The best (Rachel Carson) rouse society, whereas the worst (Jeremy Rifkin) discredit environmentalism with false alarms. The right tone, no doubt, is difficult to hit. Warnings should be stern but not strident, rooted carefully in science yet willing to leap from the never-quite-certain scenarios of scientific modeling to concerted political action. The changes advocated, moreover, must appear bold enough to address the underlying disease, yet seem feasible in the end. Throughout, today’s pessimism must be balanced by tomorrow’s hope.

James Gustave Speth’s Red Sky at Morning fulfills the requirements of eco-prophecy as well as any recent contender. Speth is superbly qualified for the role. Dean of the Yale School of Forestry and Environmental Studies, Speth founded and presided over the World Resources Institute. He also cofounded the Natural Resources Defense Council and served as chief executive officer of the United Nations Development Programme. Drawing on this broad array of practical experience, Speth is able to take the required global view of our environmental predicament yet remain grounded in U.S. politics and policymaking.

Overall, Speth’s analysis is cogent and comprehensive, and his prescriptions are well considered. He identifies biological impoverishment, increasing pollution, and global climate change as the “megatrends” of environmental deterioration. Such threats, he argues, could best be met through an “eightfold way”: stabilizing population, eliminating severe poverty, developing benign technologies, embracing environmental economics, reducing and reorienting consumption, enhancing knowledge, instituting good governance, and–in “the most fundamental transition of all”–transforming human consciousness. Although the final recommendation is rather ethereal, the others are substantial. Of particular import, under the good-governance category, is the proposal for establishing a World Environment Organization (WEO). Ideally, the WEO would stand alongside such groups as the World Trade Organization and the World Health Organization to provide an institutional framework for addressing issues that demand global action.

Speth’s message is aimed widely, couched in the language of the broad church of U.S. environmentalism. Writing at times as a political centrist, Speth advocates market approaches to pollution abatement, celebrates the (admittedly rather rare) environmentally responsible corporation, excoriates state subsidies that encourage questionable activities, and acclaims many forms of high technology. Inclining ultimately to the moderate left, however, he comes down hard against the perfectly free market, finding promise rather in good governance, effective regulation, and international institution building. Nodding occasionally to the more radical left, he keenly embraces the bottom-up approach of grass-roots organizations and spontaneous movements, praising at times the antiglobalization movement.

Finessing contradictions

Appealing to such a broad spectrum of environmentalism is perhaps politically savvy. Whether it makes for an intellectually coherent account, however, is another matter. When a single paragraph moves, with equal enthusiasm, from such a mainstream proposal as “green labeling” to the fiscally risky institution of “barter networks” before concluding with dreams of eco-salvation through a “cultural renaissance of poetry, storytelling [and] dance,” my doubts begin to grow.

More troubling is Speth’s tendency to paper over the many mutually exclusive viewpoints found within the environmental movement. It is one thing to advocate the preservation of biodiversity in order to maintain genetic stock for biotechnology, but it is quite another to expect assent from the antiglobalization protestors who are subsequently lauded for leading us into a “change of consciousness.” Any such transformed consciousness would, after all, recoil at the previously endorsed splicing of “spider genes into cows . . . [to] produce spider-web protein in their milk.” But does Speth really advocate genetic engineering, especially in such an extravagant form? Or is he merely willing to deploy it as a hypothetical reason for preserving biodiversity? Unfortunately, he seems unwilling to take a stance on controversial issues that might reveal and widen cleavages within the environmental community.

Contrastingly, but hardly surprisingly, Speth has no compunctions about taking on self-proclaimed environmentalists who have strayed outside of the fold by denying the severity of the crisis. Looming here is Bjørn Lomborg, whose “skeptical environmentalism” is viewed by many in the movement as little more than cornucopian antienvironmentalism. To his credit, Speth does not simply vilify Lomborg, as do many green writers, but instead empirically counters several of his soothing assertions of environmental health. Whether he does so in adequate detail, however, is unclear; I would think that Lomborg’s 500-page tome deserves rather more than a two-page rejoinder in such an important book as Red Sky at Morning.

The Lomborg issue, moreover, goes beyond contested assessments of ecological deterioration. The Skeptical Environmentalist makes an important subsidiary argument about data presentation, charging many prominent environmentalists with manipulating their statistics in order to twist the truth. Lomborg’s critics usually ignore, or at least sidestep, this part of his case, preferring to focus on the state of the planet. In a typical gesture, Speth dismisses all such allegations with an off-handed acknowledgement of “exaggerations and also honest mistakes in environmental advocacy.” But until I see Lomborg’s accusations receive the detailed rebuttals that they demand, I can only conclude that the problem goes rather deeper than that.

Cautioned by the Lomborg assault, Speth promises to avoid hyperbole. He usually keeps his pledge, but not always. Telling us that “a fourth of bird species are extinct,” Speth implies that these losses are recent, brought about by the current eco-calamity. That is hardly the case. What about the avian holocausts that followed Polynesian settlement on island after island across the Pacific? Not fitting within the standard litany, they apparently cannot be mentioned. At times, it seems as if Speth has a vague conception of the historicity of nature, tending to view the preindustrial world as an Edenic wonder. “The last great extinction episode occurred sixty-five million years ago with the obliteration of the dinosaurs,” he informs us, neglecting the inconvenient fact that some 85 percent of all large mammal species in the Western Hemisphere were exterminated, quite likely by Paleolithic humans, at the end of the Pleistocene Epoch. And in a telling phrase, Speth writes of Earth losing “a third of its original forest cover.” But when could this time of “origination” possibly have been? Human societies have been removing forests since the Neolithic, when global biomes were being transformed and relocated by early Holocene global warming.

But such criticisms are perhaps unfair, considering the intended function of the book. If one accepts that we do indeed face a global environmental crisis, one may easily forgive inconsistencies and exaggerations in a work designed to rally the faithful and counter the opposition. Why focus on the environmental movement’s missteps and fault lines, one may ask, when so much is at stake?

If viewed on its own terms, the book works well enough. Yet even in this regard, its message is weakened by one baffling omission: the current global military situation. Red Sky at Morning appears to have been written in an environmental bubble in which the events of September 11, 2001, and the responses that they received have no significance. In a book focused on the global condition, such an unwillingness to recognize the environmental reverberations of international conflict seems obtuse. It also squanders the opportunity to draw attention to the close potential linkages between U.S. foreign policy reform and environmental sustainability.

As has often been argued, U.S. military strategy seems in part to follow U.S. energy requirements, which in turn are rooted in our settlement patterns and help determine our environmental policies. Wedded to a system of hypersuburbanization and unfettered individual mobility, we demand cheap gasoline and therefore balk at any reduction of carbon emissions. Ensuring plentiful and secure oil, in turn, seems to necessitate a tight alliance with Saudi Arabia: a country that apparently recycles petrodollars into fundamentalist and sometimes terror-linked madrasas across the Muslim world, and in which theocratic repression helps cook the pot of fanatical anti-Americanism. Meanwhile, U.S. military adventures elsewhere in the region, aimed at quelling such tensions and securing alternative oil supplies, have not proved successful. With these considerations in mind, a security-linked “green” energy policy, based on conservation and on renewable sources, might be easier to sell to the U.S. public than we have hitherto thought.

A far-reaching reform of energy policy would demand major changes in our urban organization. As Speth recognizes, “smart growth” based on high-density infill development along public transportation corridors is the most sensible option. But here again, one must confront the internal paradoxes of U.S. environmentalism. Certainly in California’s Bay Area, every time a “smart” initiative is mooted, local neighbors rise up under a green banner to fight the good fight against the greedy developers who would pave over their favorite vacant lots and crowd their sidewalks. Exasperated builders, attempting to address a housing shortage that has pushed median prices over the half-million-dollar mark, must thus look elsewhere. They find it far easier to pave over the farmlands of the northern San Joaquin Valley, linked to the Bay Area’s jobs by freeways perpetually clogged with SUVs.

But let us environmentalists not notice anything as awkward as all of that. It is so much safer and more satisfying, after all, to attack our common enemies.


Martin W. Lewis teaches in the Program in International Relations at Stanford University.

Energy Futures

Vaclav Smil has done it again. He has written yet another important book on energy and has managed to make it interesting, readable, and rich with data and references.

He begins by examining the historical evolution of today’s energy system–the major trends; the energy transitions; the growth in the energy factors affecting global warming; the roots and impacts of wars on energy; and the social, technological, and economic forces that have affected energy production and use. He also explores the connections among the economics of energy use, the trends and interpretations of various energy indicators, environmental impacts, and externalities. And he reviews what is known about the more common air pollutants and summarizes their legal status; describes the growing role of carbon dioxide as a greenhouse gas; and, in recognition of current events and the terrorist attacks of September 11, 2001, discusses the revolutionary impacts of energy forms on modern warfare and terrorism.

In one cautionary note, Smil says the nation should not rely heavily on long-term energy forecasting when it comes to setting energy policies. Such forecasting, he says, has always been a failure–a conclusion that is difficult to disagree with. Reaching conclusions through the use of computers cloaks the results with an air of reliability that really is not warranted. He cites, for example, predictions about the emergence of specific technologies that turned out to be completely wrong. One need only think of the federal push for the adoption of liquid metal fast-breeder reactors to see how far wrong officialdom can be.

“We should abandon all detailed quantitative point forecasts,” he says, by which he means such things as total energy demand 20 years from now. In his mind, there are only two ways of looking ahead that are of value. One is to look at contingency scenarios for the results of a worldwide depression or a conflagration in the Middle East, and the second involves examining “no-regrets” scenarios that can guide reconciliation with the biosphere.

As to the future of fossil fuels, Smil examines the issue by asking: Is the decline of global crude oil production imminent? This is, to put it mildly, an issue of some importance both nationally and worldwide. Indeed, one can argue that the United States is in a state of war in Iraq in large measure to ensure access to the vast petroleum reserves of the Middle East. The present dispute over global oil production began in 1956, when the Shell geophysicist M. King Hubbert predicted that U.S. crude oil production in the lower 48 states would peak in the late 1960s and then start to decline. At that time, U.S. cumulative production had reached 50 billion barrels, and experts generally agreed that the total U.S. crude oil resource base was between 150 billion and 200 billion barrels. As it turned out, crude production peaked in 1970 and has been declining since, and current estimates put the total U.S. crude oil resource at about 200 billion barrels.

I have carried out a similar analysis for global conventional crude oil production, and the results suggest that the amount of ultimately recoverable global crude oil is at least 1,600 billion barrels but could be as much as 2,600 or even 2,800 billion barrels. If these estimates hold, then production is likely to peak between 2010 and 2020, not very far into the future.
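The arithmetic linking a resource estimate to a peak date can be made explicit with a simple logistic (Hubbert-style) model, in which production peaks once cumulative output reaches half the ultimately recoverable total. The sketch below only illustrates that logic; it is not the analysis described above, and its cumulative-production and current-output figures are rough assumptions chosen for illustration.

```python
# Minimal logistic (Hubbert-style) sketch: how an assumed ultimately
# recoverable resource (URR) maps to a peak year. The cumulative-production
# and annual-output figures below are rough illustrative assumptions, not
# data from the analysis described above.
import math

def peak_year(urr_gb, cum_gb, annual_gb, base_year):
    """Peak year for dQ/dt = k*Q*(1 - Q/URR); output peaks when Q = URR/2."""
    k = annual_gb / (cum_gb * (1.0 - cum_gb / urr_gb))  # rate constant fitted to today
    if cum_gb >= urr_gb / 2.0:
        return base_year  # midpoint already passed: peak at or before the base year
    return base_year + math.log((urr_gb - cum_gb) / cum_gb) / k

# Assumed world totals circa 2003: ~950 billion barrels produced to date,
# ~26 billion barrels per year current output.
for urr in (1600, 2000, 2600, 2800):
    print(urr, round(peak_year(urr, 950.0, 26.0, 2003)))
```

Under these assumptions, the lower resource estimates put the peak roughly upon us, while the 2,600-to-2,800-billion-barrel estimates push it into the middle or late 2010s, broadly consistent with a peak between 2010 and 2020.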

What would be the implications of such global peaking? As oil economists point out, this would not signal the end of burning fossil fuels. There are huge deposits of solid carbon fuels waiting to be converted into liquids; Canada, for example, has vast deposits of oil sands, rivaling Saudi Arabia in sheer resources. There would be, however, a high price to be paid to pursue the synfuels path. Crude oils made from solids would have even higher carbon dioxide emissions than conventional crudes. Moreover, producing nonconventional oil could be an environmental mess, with extraction operations resembling surface mining for coal. Clearly, for reasons of global warming and environmental disturbances, large-scale oil production from oil sands would not be a sustainable element in a long-term energy strategy.

So how to replace fossil fuels? Smil describes a number of nonfossil options, including such renewables as flowing water, solar, wind, waves, and biomass. The technologies to harness these energy forms are described individually, along with their estimated benefits and costs. Smil is particularly fond of wind power and photovoltaics. But though he sees rapid growth in installations and major contributions of these technologies to world electrical supply, he offers no estimates of how much they might contribute in the coming decades.

Looking at some numbers may suggest the possibilities. Consider wind power. In 1985, the peak wind capacity in California was just over 1 gigawatt (GW). Nationwide, capacity had grown to 2.6 GW by the end of 2000, with a doubling in capacity planned during 2001. Wind was growing even faster in Europe, where installed capacity had reached roughly 13 GW by the end of 2000 and is expected to total 100 GW by 2030. Some plans forecast that by 2020 wind will account for 10 percent of the world’s electrical capacity. At these rates, wind can make sizable contributions to slowing the atmospheric buildup of greenhouse gases. But by how much? Smil’s book leaves that as a problem for the reader to determine.
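A back-of-envelope calculation suggests an answer. The sketch below assumes a wind capacity factor of about 30 percent and a carbon intensity of roughly 0.8 kilograms of CO2 per displaced kilowatt-hour; neither figure comes from Smil’s book.

```python
# Rough CO2 displacement from a given wind capacity. The capacity factor
# and the emission factor of the displaced generation are assumptions.
def co2_avoided_mt(wind_gw, capacity_factor=0.30, kg_co2_per_kwh=0.8):
    """Million tonnes of CO2 avoided per year by wind_gw of wind capacity."""
    kwh_per_year = wind_gw * 1e6 * 8760 * capacity_factor  # GW -> kW, hours per year
    return kwh_per_year * kg_co2_per_kwh / 1e9              # kg -> million tonnes

print(round(co2_avoided_mt(100)))  # Europe's hoped-for 100 GW: ~210 Mt CO2/yr
```

Two hundred million tonnes a year is worthwhile but modest: on the order of 1 percent of current global CO2 emissions from fossil fuels.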

Photovoltaics also have been growing rapidly. By 2000, cumulative installed capacity had reached 500 megawatts in the United States and almost 1 GW worldwide. According to the National Center for Photovoltaics, 3.2 GW of capacity will be installed in the United States by 2020. Ultimately, total installed capacity is projected to reach 15 GW in the United States and 70 GW worldwide.

Not only are wind and photovoltaics renewable; they also produce electricity without air pollution or greenhouse gas emissions. Unfortunately, both pose major problems of intermittence and portability (for transportation). To overcome the intermittence, some form of backup or energy storage is needed. Advanced batteries would be one way to store excess power, but hydrogen combined with fuel cells may be even more promising. Hydrogen would be produced by electrolysis when the supply of renewable electricity exceeds demand from the grid. Hydrogen also could serve as a portable form of power, particularly for use in automobiles, trucks, and perhaps airplanes.
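The storage logic is simple enough to sketch: surplus renewable output feeds an electrolyzer, and a fuel cell covers shortfalls. The conversion efficiencies and storage capacity below are illustrative assumptions, not figures from the book.

```python
# One time step of a toy renewable-plus-hydrogen dispatch rule. Efficiencies
# and storage capacity are illustrative assumptions.
def dispatch(renewable_mw, demand_mw, h2_store_mwh, hours=1.0,
             electrolyzer_eff=0.7, fuel_cell_eff=0.5, store_cap_mwh=1000.0):
    """Return (unmet_demand_mw, new_h2_store_mwh) after one time step."""
    if renewable_mw >= demand_mw:  # surplus: run the electrolyzer
        surplus_mwh = (renewable_mw - demand_mw) * hours
        stored = min(store_cap_mwh, h2_store_mwh + surplus_mwh * electrolyzer_eff)
        return 0.0, stored
    deficit_mwh = (demand_mw - renewable_mw) * hours  # shortfall: run the fuel cell
    if h2_store_mwh * fuel_cell_eff >= deficit_mwh:
        return 0.0, h2_store_mwh - deficit_mwh / fuel_cell_eff
    return (deficit_mwh - h2_store_mwh * fuel_cell_eff) / hours, 0.0  # backup still needed

# Example hour: 300 MW of wind against 500 MW of demand, 600 MWh of stored hydrogen.
print(dispatch(300, 500, 600))  # -> (0.0, 200.0): demand met, storage drawn down
```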

Smil discusses the technological status, cost, and performance of hydrogen technology in some detail. Two serious problems, he says, remain to be resolved. One problem is developing storage that is cheap and large enough in volume to power a vehicle for several hundred miles. The other is safety, though some observers argue that this problem can be dealt with through suitable safety regulations and appropriate technological fixes.

The book also considers the option that everyone concerned with energy faces at one time or another: nuclear power. On the future of this technology, Smil is not an optimist. He cites Alvin Weinberg’s Faustian bargain, whereby the benefits of clean nuclear power come at the price of eternal vigilance and care, a level of vigilance that seems never to have been achieved. The tradeoffs for building nuclear plants include, in the worst of circumstances, environmental (accidents) or military (proliferation) catastrophes. Smil also decries the government’s failure to provide permanent storage for radioactive wastes, and he points out that the costs of building new nuclear plants and their levels of performance are highly uncertain.

Smil ends with a discussion of possible energy futures. Given his skepticism about forecasting, it should come as no surprise that he offers no modeling results with numerical estimates of future energy supply and demand. Rather, he offers what amounts to an essay on long-term energy trends and the factors affecting them. He also describes a collection of energy sources and technologies that he favors (renewables, hydrogen, hydro dams, and improved efficiency) and those he does not care for (fossil fuels, geoengineering, and nuclear power). He calls for subsidies to support the sources that are environmentally preferable: wind and photovoltaics.

Even in citing a role for improving energy efficiency, Smil is pessimistic that this approach will do much to control national energy demand. Efficiency improvements, he says, will not curb energy growth as much as expected, because of the rebound effect. That is, initial efforts to reduce energy consumption will lead to economic savings that consumers will spend, in the process creating more demand for energy.

One of his principal concerns is about future energy use and its impacts on global warming. He has little use for the Kyoto Protocol, declaring that “even a fully implemented Kyoto Protocol would have done little to prevent further substantial increases of [greenhouse gas] emissions,” and labeling the agreement merely “a timid start in the right direction.”

One minor quibble: I would have liked Smil to do more to identify and discuss the people who have been instrumental in the evolution of today’s energy system. But this gap can be easily overlooked given that the general topic–the strategic energy decisions we will have to make in the near future–is so vital. Smil covers most of the important topics well, from a variety of perspectives, including environmental, economic, and technical performance. On these issues, his book serves as a good introduction. It should prove especially useful for academics who teach courses on contemporary energy issues, and may well help point the way to a more secure energy future.

Completing the Transformation of U.S. Military Forces

On taking office in 2001, Secretary of Defense Donald Rumsfeld announced his intention to transform the U.S. armed forces to meet today’s threats of rogue states and transnational terrorism. The effectiveness of U.S. fighting forces in Afghanistan and Iraq indicates that the transformation, which some have called a “revolution in military affairs,” is on the right path. But many technical challenges remain to be met, and today’s headlines make it clear that the end of combat between organized armed forces does not necessarily herald the end of a war. If the United States chooses to rest on its military laurels, the nation may in the long run lose the great benefits that have accrued from the armed forces’ efforts thus far.

The U.S. armed forces today are characterized in large measure by their unique ability to attack opposing military forces with enough precision and speed to prevail against heavy odds. This capability is, as much as any of the armed forces’ other features, indicative of their transformation over the past decade and a half from forces tailored for major land war in Western Europe to those far better suited for the new kinds of warfare that have come to face the nation since the Berlin Wall came down in 1989.

The transformation actually began very gradually but picked up speed after U.S. military leaders learned valuable lessons in the first Gulf War. There, the forces designed for Europe achieved their goal magnificently but were ponderous and hence slow to respond to changing battlefield conditions when agility was needed. The pace of change picked up again in the mid-1990s, when there was a need to limit civilian damage and casualties during military operations in the Balkans. After taking office in 2001, Rumsfeld institutionalized the notion of force transformation to meet the new world conditions. The success of the transformed military was evident in action in Afghanistan and in Iraq, where a U.S. force far smaller than the one that waged the first Gulf War defeated Iraq’s armies and overran the country in about a month.

Two major advances in technology helped make the transformation possible. First, the joining of Global Positioning System (GPS) navigation updates with coarse inertial guidance reduced the cost of precision-guided weapons that the military services had previously been reluctant to use in abundance because of their high unit costs. Second, vast improvements in information processing and communications enabled the forces to be embedded in a broad information and targeting network that, together with ensuing changes in command relationships that shifted battlefield responsibility and authority to lower levels of command, makes them far more agile and responsive to battlefield conditions than the Cold War era forces had been. Retired Vice Admiral Arthur Cebrowski, who now heads the Department of Defense’s (DOD’s) Office of Force Transformation, dubbed this new approach “Network-Centric Warfare,” and it was convincingly demonstrated in the seamless melding of air and land forces in Afghanistan and Iraq.

The force changes were accomplished through large substitutions of capital for labor, just as productivity has been increased in the civilian economy. The exchanges can be illustrated by comparing the armed forces’ size and budgets for 1970 and 2003. The 1970 forces had almost none of today’s precision engagement capability, whereas today’s forces are essentially built around that capability. The DOD budgets of the two years, in constant dollars, are within 5 percent of each other, but today’s forces are far smaller than the forces were in 1970 at the height of the Vietnam War. The Army has only one-third as many members in 2004 as it did in 1970, and the other services have shrunk as well. The equipment cost per person in the active forces has approximately doubled, and the budget-allocated personnel cost per person has increased by about two-thirds. These increases reflect the incorporation of more sophisticated weapons and the prevalence of more highly educated and better-trained members in the all-volunteer military. On average, the United States spends just short of $300,000 per person in the armed forces–twice as much as its closest allies and far more than any potential antagonists.
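The per-person figure is easy to check against the overall budget. The active-duty strength used in the sketch below is an approximation for 2004, not a number given in the article.

```python
# Rough check: total defense budget divided by active-duty strength.
budget_usd = 400e9     # FY 2004 defense budget, roughly (see figures later in the article)
active_duty = 1.4e6    # approximate active-duty end strength, 2004 (assumption)
print(round(budget_usd / active_duty))  # ~286,000, i.e. "just short of $300,000"
```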

Where they can be measured in dollar terms, the returns on this huge investment have proved far larger than the investment itself. At the macro level, the savings from smaller forces and the shorter duration of organized combat between the first and second Gulf Wars are enough to pay for at least two years’ worth of the overall investment. At the micro level, the cost to destroy a military target from the air has been reduced to as little as one-fifteenth of what it was under previous conditions.

Indicators of intangible benefit can be derived from comparisons of performance in conflicts that were similar, even though no two conflicts are exactly alike. Compare the duration of, and the military casualties in, the Soviet and U.S. campaigns in Afghanistan, or consider the number of civilian casualties in the bombing of Dresden in World War II compared with the numbers in the more recent Belgrade and Baghdad bombing attacks, which were directed against similar targets.

The Soviet war in Afghanistan, waged with “old-fashioned” forces numbering about 100,000, lasted nine years and resulted in 15,000 soldiers killed in action. The Soviets eventually withdrew without succeeding in their mission of controlling the country. In comparison, U.S. forces, about 10,000 strong and working with Afghan allies, defeated the Taliban and Al Qaeda in less than three months and suffered about 100 soldiers killed in action. The carpet bombing of Dresden cost between 35,000 and 135,000 lives (estimates vary widely) and destroyed the city, whereas the 1999 Belgrade and 2003 Baghdad bombing campaigns cost on the order of a few hundred civilian lives with minimal incidental destruction.

Don’t stop now

Although the performance of today’s force has been impressive, further improvement is needed. Evidence of weakness includes mistaken attacks against civilian targets, fratricidal attacks against friendly force elements, and the difficulty of finding and disposing of opposition leadership. Moreover, the U.S. military’s new force structure and posture are vulnerable in many ways. As is often the case, the vulnerabilities are inherent in the fundamentals of military organization and modes of combat. To name just a few of the more critical ones:

  • The targeting network and the guidance systems of the forces’ most lethal weapons can be foiled by the use of cover, concealment, and deception (for example, tanks hidden in haystacks in Kosovo) and by interference with satellite navigation signals.
  • The communication networks, some of them commercial, on which the military depends to deliver targeting and other data to processing centers and to the forces in the field, and to manage those forces’ highly dispersed operations, can be disrupted by many kinds of electronic and physical attack.
  • U.S. forces have had the luxury of air supremacy in recent conflicts. However, the Soviet Union, before its collapse, had been fielding long-range, high-altitude, antiaircraft missile systems capable of denying this air supremacy or at least putting it at serious risk. For example, the Russian S-300 SAM system can reach out about 120 miles to attack aircraft in level flight and was claimed by the Soviets to have counter-stealth characteristics. This system could attack the essential U.S. surveillance and command and control aircraft orbiting several tens of miles from a battlefield, as well as the combat aircraft delivering weapons from what has thus far been the sanctuary above an altitude of 15,000 to 20,000 feet. A new version of this system, the S-400 Triumph, with even longer range, is said to be under development specifically for the export market. These systems are now for sale to any willing buyer, including those with whom U.S. forces may become engaged in the future; China has them, and North Korea is said to be in negotiations with Russia to acquire them.
  • The sea lines of supply that U.S. forces need for logistic support are vulnerable to attack by quiet non-nuclear submarines such as the Russian Kilo-class submarines and newer Swedish and German designs that are being purchased around the world. Iran, Pakistan, China, and North Korea have such submarines, and China has several nuclear-powered submarines; China and North Korea are still building and modernizing their submarine fleets. Also, ballistic missiles, such as the Soviet-era SCUD, and maritime mines are widely available weapons that can be used against ports that U.S. ships must use.
  • U.S. forces are now designed to achieve quick victories against organized armies, navies, and air forces. But although an opponent’s regular forces might quickly collapse under an onslaught like the one that took Baghdad in less than a month, the conflict could be continued at length by irregular forces. Thus, an opponent could deny the fruits of a quick victory and raise the stresses of a long war despite the strength of U.S. forces, as the United States is learning in Iraq even as this is written.

All of these vulnerabilities except the last one can be mitigated or overcome by technical means. The United States must therefore make progress on two fronts: (1) completion of the transformation to deal with other regular armed forces and (2) the addition of necessary capabilities, many of them not ordinarily considered military functions, to meet the irregular or so-called “asymmetric” threats.

With regard to the first, the U.S. military is currently in a position to move so far ahead of any potential opposition by regular armed forces that it might forestall the effectiveness of technical countermeasures against U.S. forces for a long time into the future. Of course, this requires early increases in defense spending in anticipation of later savings. Many policymakers concerned about the growth of the defense budget favor leaving the forces essentially unchanged and meeting any newly appearing countermeasure with a counter-countermeasure. The more aggressive approach will mean completing the acquisition of the many major systems currently in development and a few that have yet to be started.

These include systems such as the Navy’s new DD(X) destroyer and Virginia-class attack submarine, the Air Force’s F-22 fighter, the Marines’ V-22 tilt-rotor vertical-takeoff airlifter, the multiservice F-35 Joint Strike Fighter, and several others. In addition, there will have to be a costly new type of logistic ship, in keeping with the Navy’s emerging philosophy of basing logistic support largely at sea instead of on politically and physically vulnerable land bases. It will require the cargo capacity of today’s maritime prepositioning ships, but it will also have to incorporate a flight deck and the ability to move cargo containers about on the ship and break them down into pallet-sized loads for transportation to shore. The major system developments will also have to include ship- and land-based antiballistic missile systems to meet the theater ballistic missile threat that many potentially hostile nations are building. That threat can be expected to include guided antiship ballistic missile warheads that would, by targeting both U.S. warships and logistic ships, jeopardize the essential U.S. command of the seas.

To these systems must be added extensive augmentations, improvements, and joint service integration of the networked surveillance, targeting, and communications systems to deal with the flaws in the current combat information network and to continue perfecting the new approaches to precision engagement under the Network-Centric Warfare paradigm. These improvements to the warfare information network will also entail investment in some expensive combat or quasi-combat systems, such as the Predator and Global Hawk high-altitude, long-endurance, unpiloted surveillance aircraft that are in use today; unmanned combat air vehicles designed to perform hazardous missions such as attacking and destroying the most effective long-range ground-based air defenses; piloted radar and electronic surveillance aircraft that also function as elaborate airborne command centers; and many kinds of spacecraft, such as a space-based radar and a more jamming-resistant successor to the current GPS.

Why not wait?

Although there have been no arguments about the need to enhance the combat information network and systems, including their intelligence components, there have been extensive arguments about the need for any or all of the new and advanced aircraft, ships, and ground combat vehicles. The primary objections to the new systems are that they cost too much and are unnecessary now that the United States has no enemies with the military sophistication that the Soviets possessed. But these arguments fail to account for certain realities.

First, potential opponents may field formidable armed forces to meet those of the United States. For example, North Korea remains an enigmatic but powerful threat to U.S. interests in the Pacific region. Another example in that area might be a China that, although friendly in a guarded sort of way now, could easily become a military opponent over the issue of Taiwan. That situation could blow up at any time from a misunderstanding of the positions of any of the three principals–China, Taiwan, or the United States. If the United States does not field forces obviously able to meet the North Koreans or the Chinese militarily, the growing capabilities of those countries could cause Japan to wonder about the military reliability of the United States as an ally. Although Japan’s constitution puts a limit on the growth of the country’s offensive military capability, the government could remove that limit if it felt threatened, and Japan has the technological capability to develop advanced weapons, possibly including nuclear weapons.

North Korea and China are but two examples of potential opponents in the arc of instability that reaches from North Africa through the Middle East and south and central Asia all the way to the Korean peninsula. A third such opponent could arise without much strategic warning in Pakistan, if its government were to fall to the country’s Islamist fundamentalist factions.

This is not the place to discuss the likelihood of such threats arising, but we must take note of the potential developments that could evolve into military threats. As has been highlighted above, several of these possible opponents are actively acquiring some of the advanced Soviet-era and more recent systems that can exploit the vulnerabilities of today’s U.S. forces. And we must certainly expect that China, with its fast-growing, technology-based economy, will soon be able to field its own versions of such systems.

The problem for the United States, then, is to track and maintain superiority over the growing capability of potential military opponents. Current U.S. military systems are able to match those of such opposition now, but if the United States stops advancing its capability, that increasingly precarious balance could change. Worse, the nation might not realize that the balance had changed until it was already engaged in battle.

The argument that if the United States remains alert, it can identify developing threats in time to respond fails to recognize how long it takes to respond. It takes on the order of 10 to 20 years to field major new military systems. It can take a decade just to field a significant improvement in an existing system, such as a new aircraft or ship radar system. Yet the strategic and military need for such systems could arise in a year or two, or even as a total surprise, as the country learned at Pearl Harbor and feared throughout the Cold War.

All of the attempts to evade this reality run into difficulties that the United States has experienced but does not always want to acknowledge. It has been proposed, for example, that new system developments be limited to prototypes that can be put into production if necessary. But there is a long path from designing and building a prototype to putting a fully militarized system into production. The military aspects of the design must be finalized, and the production engineering and production machinery must be designed and built. The final product usually turns out to be quite different from the prototype in many respects. The F-35, for example, is three years into this evolution, and the process is not complete yet.

Nor can the United States simply plan to produce but a few of the new systems to keep its hand in, as it were. Because the price of every system unit produced must include a share of the costs of the research, development, testing, and production tooling that brought the system into being, the smaller the production quantity of the system, the higher its unit cost will be. We have seen, with systems such as the B-2 bomber and the F-22 fighter, that the high unit cost itself becomes an economic and political issue that leads to increased calls for system cancellation.
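The unit-cost arithmetic behind that point is straightforward: the fixed development, testing, and tooling bill is spread over however many units are bought. The figures in the sketch below are hypothetical, not actual program costs.

```python
# Average unit cost when nonrecurring costs are amortized over the production run.
def unit_cost(nonrecurring_usd, recurring_per_unit_usd, quantity):
    return nonrecurring_usd / quantity + recurring_per_unit_usd

# Hypothetical program: $20 billion of development and tooling,
# $100 million recurring cost per aircraft.
for qty in (750, 300, 100, 20):
    print(qty, "units:", round(unit_cost(20e9, 100e6, qty) / 1e6), "M$ each")
```

Cutting the hypothetical buy from 750 units to 20 raises the unit cost nearly ninefold, which is exactly the dynamic that invites cancellation calls.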

All of the next-generation system acquisition can be viewed as a continuation of the substitution of capital equipment for labor in the evolution of the armed forces. For example, the new Navy destroyer is being designed to operate with a crew about a third the size of the crew on the current DDG-51-class destroyer. The incorporation of advanced stealth and electronic warfare technology in new systems will put them a generation ahead of the Russian-designed antiaircraft and antiship weapons currently being fielded by potential opponents. Because the United States would not have to reconstitute an atrophied military industrial base when a threat appears on the horizon, it would in the long run be cheaper to stay ahead of the advancing military opposition than to try to catch up should the country allow itself to fall behind.

If the United States continues the advances in military hardware and the personnel training associated with it, the “existential deterrence” value of the forces (simply the knowledge of U.S. military strength and commitments) will be sustained. This will certainly help the United States and its allies retain confidence in their national security.

After the “regular” war

The second direction in continuing force transformation must be to prepare to meet and defeat the kinds of terrorist and guerrilla opposition that could drain rapid military victories of their political and strategic advantages. Guerrilla warfare is a way for weak forces to take on strong ones and make their governing and security positions untenable. Guerrillas do so by attacking at times and in places where the stronger forces’ guard is down or where those forces can be outnumbered locally, and by attacking the infrastructure on which the population the stronger forces are supporting and protecting depends for its livelihood and welfare.

The term “terrorism” masks a different phenomenon that shares with guerrilla warfare the need to overcome much stronger opposition. Whereas guerrillas focus on the local infrastructure and armed forces, terrorists focus almost exclusively on people, and mainly on civilians at that. In addition to trying to disrupt and discourage the U.S. military’s local efforts directly, they play on the Western world’s high valuation of human life to create internal dissent that serves their purposes even more powerfully. They attack in situations where many local civilians will be killed, and they deliberately set up situations in which U.S. and allied forces will kill innocent civilians or insult local religious sensibilities. For example, they fire at U.S. troops from the middle of a crowd with intent to draw return fire, and they set up weapon positions in mosques or (as in Vietnam) in temples to draw fire on those sites.

Clearly, the U.S. armed forces’ ability to overcome opposing regular forces quickly doesn’t address these problems, which arise after a victory or even where there is a non-conflict-related U.S. or Western military or civilian presence in the form of military bases, embassies, or simply large tourist attractions with many Western visitors. Protection against terrorist attack on the U.S. homeland is not the task of the U.S. military, unless the president orders it. However, U.S. military forces must be able to protect themselves from terrorist attack wherever they have a presence on the globe, and they must be able to prevent terrorists and guerrillas from turning a rapid military victory into a drawn-out and politically unpopular conflict. To achieve this, the local population and infrastructure must be protected; the guerrillas and terrorists must be found, attacked, and defeated; and the adverse conditions that may have led members of the local population to support them must be changed.

The precision engagement capability that enables the defeat of opposing regular forces aids the objective of reducing casualties in the local civilian population as much as possible. The approaches to protecting the population and attacking the guerrillas and terrorists are similar in a local area. Intelligence is of prime importance. Some of it will be fed to military forces from the outside, but they must also be able to gain and act on local intelligence themselves. This requires knowledge of the local area and languages, at least on the part of some of our military units. It also requires the ability to relate to the population and to any local forces with which the armed forces will work. That, in turn, means extensive training in the local culture and practices. It may mean not rising to the provocation of hostile fire from crowds or sacred places, even at the expense of giving the guerrillas and terrorists an apparent sanctuary.

It is in this last objective–that of changing the situation and outlook of the local population so that they will not support the irregulars–that the task for the U.S. military differs vis-à-vis the guerrillas and the terrorists. For if the terrorists are outsiders who have joined a local conflict because it offers them an opportunity to strike at U.S. interests and those of its allies, their reasons for doing so will not be affected by anything U.S. forces do with the local population. That is a national task that begins in Washington; it involves U.S. global foreign policy and the nation’s presence and participation in the global economy. However, in the cases of both local guerrillas and outside terrorists, U.S. military forces in place should try to gain intelligence from the local population that will, in turn, help protect both the forces and that population on the spot. To do this, they must earn a measure of that population’s confidence.

What might the local population want after a war has been fought, their army defeated, and their cities and countryside occupied by U.S. and allied forces as conquerors? Once they determine that those forces are not unremittingly hostile and repressive toward them, they will probably want to go about the business of daily living: doing useful work and getting paid for it; having their self-government restored; and having a reliable infrastructure of roads, bridges, communications, electricity, water, food, schools, and medical care. This means that the military forces also need the capability to establish and support local government and to communicate with the local population and its leaders. They will also have to perform construction tasks that are usually the province of civilian engineering firms.

It thus appears that to minimize or obviate the possibility of residual opposition that can turn a fast military victory into a long war of attrition, the active U.S. military forces will have to be expanded to include a capacity for establishing a mixture of local governance, police-oriented security, and rapid infrastructure engineering and construction.

All this comes under the controversial rubrics of peacekeeping and nation building. Why load it all on the military forces? To begin with, they are likely to be in place where they are needed. If they are not, or are not there in sufficient numbers, they can move quickly to be there. They are trained, they are disciplined, they can marshal resources rapidly, and they can fight if need be. No civilian organizations can meet all those conditions as effectively. Indeed, if we try to use civilian organizations under conditions of guerrilla and terrorist combat, as we learned in Afghanistan and are learning in Iraq, the military must be assigned to protect them in any case, and the work does not get done as rapidly or as effectively as the military can do it. Also, by helping the local population adjust to its new situation quickly, the U.S. military can reduce local support for guerrillas–drying up the sea in which they swim, in Mao Tse-tung’s famous analogy. These circumstances might even induce many of the local guerrillas to decide to give up the fight. And if the local population decides that supporting transnational terrorists who have entered the fight is not in its interest, that decision could increase the flow of intelligence that will help defeat them.

On the assumption that in any conflict requiring such forces we will be involved with allies, the issue may arise as to whether we should rely on allies to perform the nation-building and peacekeeping tasks, in part because they may be better versed in the local culture and language and in part because they would represent a force augmentation that is cost-free for the United States. However, U.S. forces still need these capabilities: for the period immediately after defeating regular military opposition, when allies’ forces may not be readily available; for sectors that U.S. forces will control exclusively; and for working effectively with allies who will themselves have different cultures, outlooks, and training.

The bottom line

All of the needs and tasks sketched above for local post-conflict peacekeeping and reconstruction will add to the resources that the military needs. How much will it cost? An accurate estimate would require a careful review of capabilities already in the services to ascertain what must be added. For example, some of the needed capability is already embedded in the active military forces in the form of Special Operations Forces trained to work with local forces and populations and of combat engineers in all the services. Some augmentation of the engineers to deal with specialized tasks such as restoring damaged power grids might be needed, and active-duty civil affairs units including military government, police, and related functions would have to be added. If the equivalent of about one battalion per Army and Marine Division were to be required to cover all contingencies, the cost might come to between $2 billion and $3 billion per year. If only deployed forces were considered, the cost could be less.
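To make that range concrete, here is the arithmetic with assumed inputs: roughly 10 active Army divisions plus 3 Marine divisions, and an assumed all-in annual cost per added battalion-equivalent covering personnel, equipment, and training. Neither number appears in the article.

```python
# Back-of-envelope cost of one added battalion-equivalent per division.
divisions = 10 + 3  # assumed active Army + Marine divisions, circa 2004
for per_battalion_usd in (150e6, 200e6, 250e6):  # assumed annual all-in cost
    print(f"${per_battalion_usd / 1e6:.0f}M each -> "
          f"${divisions * per_battalion_usd / 1e9:.1f}B per year")
```

Under these assumptions the total falls roughly in the $2 billion to $3 billion range cited above.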

The U.S. defense budget of nearly $400 billion increased by about 8 percent in real terms between fiscal year (FY) 2002 and FY 2004 and is projected by DOD to grow between 2 and 3 percent, or around $10 billion, per year for the next several years (not including as-yet-unspecified costs of the war in Iraq). In a defense budget of $400 billion, the amount required for the additional forces and their training might seem relatively small. But in the current tight budget climate, competition for federal funds will be intense, and there will be pressure to reduce the defense budget by canceling one or more of the major systems that are viewed as Cold War carryovers.

However, as we have seen, such a move could prove to be penny wise and pound foolish in the long run, because it would risk making U.S. forces relatively weaker in the face of potential military opposition. Moreover, the cancellation of a major system acquisition might not save the total projected system cost in the overall service or defense budget. The Army, for example, cancelled its Comanche combat helicopter after nearly 20 years of development and expense, saying that it doesn’t need the aircraft now that the conditions occasioned by the Soviet threat have disappeared. But one of the additional reasons given for the cancellation was that the Army wants to use the money for needed modernization of its existing helicopter fleet as well as to advance its Future Combat System.

Roughly half of the defense budget pays for personnel. Therefore, some savings might be inherent in the substitution of capital for labor as the future forces evolve, depending on how the personnel/equipment mix develops. Then, the overhead must be thoroughly scrubbed for unneeded bases and other Cold War legacies, which will be even more difficult politically than canceling major system acquisitions. Beyond that, the only practical way to make major changes in the defense resources required is to change the size of the forces. If the U.S. public wants to save additional money on defense, the United States will have to reduce forces proportionately, and as that is done, the risks to national security and national military strategy will increase accordingly.

Thus, the construction of America’s armed forces to meet future threats to U.S. security on all fronts will in many ways be more difficult than it was during the Cold War. U.S. forces can be kept demonstrably superior to the organized armed forces of any potential national opponents who might appear, but only if they remain in a cycle of continual renewal in anticipation of that opposition. To this capability must be added the functions, costly in people, that are needed to meet the asymmetric threat in theaters of war. Moreover, U.S. forces will have to work with allies who face similar problems but are not devoting nearly as much of their resources per individual to the armed forces and are not likely to do so. As we look to the future operations of U.S. armed forces, it appears that the United States is entering a period analogous to the Cold War, but in a new and more difficult strategic context.

Forum – Summer 2004

Preventing nuclear proliferation

Michael May and Tom Isaacs’ “Stronger Measures Needed To Prevent Proliferation” (Issues, Spring 2004) is a significant contribution to the ongoing discussion on how best to strengthen the Nuclear Non-Proliferation Treaty (NPT) Regime, the centerpiece of international security.

The authors’ suggestion to develop a protocol to improve the physical security of weapons-usable material is entirely laudable. The Convention on Physical Protection, important to efforts to protect such materials from terrorist acquisition, applies only to materials in international transport. It is necessary to establish stringent and uniform standards that are applicable to such materials wherever they are found worldwide. Some progress has been made, but this will be a difficult task.

The authors are correct to stress the importance of the International Atomic Energy Agency (IAEA) Additional Protocol for enhanced NPT inspections. It will make NPT verification more effective. However, I do not agree that sensitive exports should be allowed only to states that have approved the protocol; to date, only about 20 percent of NPT parties have ratified it. Rather, the immediate effort should be to persuade all NPT parties to join, or to have the IAEA make the protocol a mandatory part of the Safeguards System.

With respect to the authors’ suggestion to minimize the acc

In addressing the section of the article headed “Reducing the demand for nuclear weapons,” all of which is well taken, I would make just two comments. First, it is indeed important to expand and strengthen security assurances for NPT non-nuclear weapon states that are in compliance with their obligations. Second, the NPT Regime, and nuclear nonproliferation policies generally, are not going to succeed in the long run unless the nuclear weapon states, particularly the United States, stop hyping the political significance of nuclear weapons and begin to take serious measures to reduce their political value and thereby their attractiveness. The NPT simply will not remain effective unless the nuclear weapon states live up to the disarmament obligations in their half of the NPT basic bargain, which means, most importantly in the near term, ratification and entry into force of the Comprehensive Test Ban Treaty.

THOMAS GRAHAM, JR.

Special Counsel

Morgan Lewis & Bockius LLP

Washington, D.C.

Thomas Graham, Jr. is former Special Representative of the President for Arms Control, Nonproliferation and Disarmament.


Michael May and Tom Isaacs make a convincing case that stronger measures are needed to fight nuclear proliferation. Their call for an updated Atoms for Peace effort is especially important today, because a significant increase in nuclear energy use is needed to solve many of the energy and environmental challenges ahead, and the need to secure nuclear materials has become more urgent with the advent of megaterrorism. I agree that the fundamental problem is not a lack of ideas, but rather one of inadequate priorities and, in my opinion, ineffective implementation.

Here I address the first of their three components of an enhanced effort–materials control and facilities monitoring–because it is the most urgent, although the other two–effective international governance and reduction of demand for nuclear weapons–are important as well. The following two actions require urgent attention while the international community takes up the list of security measures proposed by the authors:

1) Immediate steps to enhance nuclear security by all governments that possess weapons-usable materials. Recent revelations about the irresponsible proliferation actions of A. Q. Khan put Pakistan at the top of the priority list. Its government must take immediate steps to assure itself and the world that its stockpile of fissile materials is secure and that the export of nuclear materials and technologies has been halted. After a decade of U.S. assistance, Russia must finally step up to its responsibility to provide modern safeguards for its huge stockpile of fissile materials. Likewise, the countries possessing civilian materials that can readily be converted to weapons-usable ones must redouble their efforts to secure such materials at every step of the fuel cycle.

2) Weapons-usable materials should be promptly removed from countries with no legitimate need for them. It’s time to find an acceptable path to eliminate the nuclear weapons efforts of North Korea and Iran. Likewise, such materials should be promptly removed from Kazakhstan. Much greater priority must be given to removing highly enriched uranium from research reactors in countries that cannot guarantee adequate safeguards. In most of these countries we should facilitate the closure of such reactors rather than their conversion to low-enriched uranium.

These stopgap measures must be accompanied by longer-term actions such as those outlined by the authors. We must revisit the safeguards necessary to enable Eisenhower’s vision of plentiful energy for all humankind. Today’s security concerns suggest a three-tiered approach to expanding nuclear energy: Nations with a demonstrated record of stability and safeguards may possess nuclear reactors and full fuel-cycle capabilities. The next tier of nations may not possess fuel-cycle facilities but may lease the fuel and/or nuclear reactors. The last tier has no access to nuclear capabilities but imports electricity from reactors in the region. In addition, the International Atomic Energy Agency and the United Nations Security Council must be given the proper enforcement authority to deal with those who violate the international nonproliferation norms.

SIEGFRIED S. HECKER

Senior Fellow, Los Alamos National Laboratory

Los Alamos, New Mexico


Michael May and Tom Isaacs provide a thorough, thoughtful review of contemporary nuclear proliferation issues and a comprehensive series of sensible proposals that could provide the basis of an expanded nuclear energy regime. Why would we need such an expanded regime? Many of the benefits of nuclear power are discussed in “The Nuclear Power Bargain” by John J. Taylor in the same issue of Issues, but one benefit stands out: Over the next century, a large increase in nuclear power could do much to ameliorate the threat of global climate change. But because the industrial processes of the nuclear energy fuel cycle are similar to those of the nuclear weapon fuel cycle, a large increase in nuclear energy would further increase concern about the proliferation of nuclear weapons. Some simple numbers make this point.

The current global supply of electrical energy is 1,650 gigawatt-years (GWyr), with fossil fuel at 1,065, nuclear at 275, and hydroelectric at 375. If we assume, conservatively, that electricity demand will triple in the next 50 years, then global supply will reach about 5,000 GWyr. A very optimistic assumption for the level of renewable and hydroelectric power would be 1,000 GWyr, leaving the balance between fossil and nuclear. To hold the level of fossil fuel-generated electricity to 1,000 GWyr, the amount of nuclear power must grow to 3,000 GWyr, an increase of about a factor of 10. This would be good for the environment, but–assuming current light-water reactor technology–such an expansion of nuclear power would produce enough spent fuel to fill a Yucca Mountain Repository every year and require a uranium enrichment industry that could also produce over 100,000 “significant quantities” of highly enriched uranium suitable for nuclear weapons.
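For readers who want to retrace the arithmetic, the short Python sketch below simply replays the scenario above; every figure in it (the 1,650 GWyr starting point, the 275 GWyr nuclear share, the tripling of demand, and the 1,000 GWyr allowances for renewables and for fossil fuel) is taken from the letter, and nothing is independently estimated.

# Replaying the letter's scenario arithmetic; all inputs are the letter's own figures.
current_total_gwyr = 1650      # today's global electricity supply (GWyr)
current_nuclear_gwyr = 275     # today's nuclear share (GWyr)

future_total_gwyr = 3 * current_total_gwyr    # demand triples: roughly 5,000 GWyr
renewables_and_hydro_gwyr = 1000              # "very optimistic" renewable/hydro level
fossil_cap_gwyr = 1000                        # hold fossil-generated electricity here

nuclear_needed_gwyr = future_total_gwyr - renewables_and_hydro_gwyr - fossil_cap_gwyr
growth_factor = nuclear_needed_gwyr / current_nuclear_gwyr

print(f"Nuclear power needed: about {nuclear_needed_gwyr} GWyr, "
      f"a factor of {growth_factor:.1f} over today (the letter rounds this to 10)")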

Thus, any expansion of nuclear power must deal simultaneously with the issues of spent fuel and proliferation. Recycling spent fuel with the well-developed once-through PUREX/MOX process can reduce the amount of spent fuel by around a factor of two, but at the expense of creating a recycling industry capable of producing many significant quantities of plutonium suitable for nuclear weapons. Advanced fuel cycles incorporating fast reactors that burn the waste from thermal reactors show promise for reducing the radioactive material storage problem by orders of magnitude and would not produce separated plutonium. Such a fuel cycle has yet to be demonstrated at scale, would be decades away from significant deployment, and would require an increased enrichment industry for the associated thermal reactors. In short, although new technologies for proliferation resistance and spent fuel management can help, any large increase in nuclear power will require a new international institutional arrangement to alleviate proliferation concerns.

May, Isaacs, and Taylor (and many others) suggest that fuel leasing could be such an international arrangement. This concept would require nations to choose one of two paths for civilian nuclear development: one that has only reactors or one that contains one or more elements of the nuclear fuel cycle, including recycling. Fuel cycle states would enrich uranium, manufacture and lease fuel to reactor states, and receive the reactor states’ spent fuel. All parties would accede to stringent security and safeguard standards, embedded within a newly invigorated international regime. Reactor states would be relieved of the financial, environmental, and political burden of enriching and manufacturing fuel, managing spent fuel, and storing high-level waste. Fuel cycle states would potentially have access to a robust market for nuclear reactor construction and fuel processing services.

Fifty years ago, President Eisenhower suggested that the threat of nuclear war was so devastating that it was critical to create an international community to control fissionable material and to fulfill the promise of civilian nuclear technology. Much of Eisenhower’s vision was realized, but material control remains a vexing problem. Today’s nuclear issues center on proliferation, continued growth in electricity demand, and global climate change, all embedded within a far more complex international political infrastructure. The concept of nuclear fuel leasing would appear to provide a mechanism for substantial nuclear power growth and a framework for enhanced international cooperation.

VICTOR H. REIS

Senior Vice President

Hicks & Associates

McLean, Virginia

Victor H. Reis is former assistant secretary for defense programs in the U.S. Department of Energy.


Michael May and Tom Isaacs nicely summarize the materials control, governance, and nuclear weapons policy options needed to strengthen Nuclear Non-Proliferation Treaty (NPT) implementation. The core issue is controlling weapons-usable fissile material (highly enriched uranium and plutonium) and the associated production technologies (enrichment and irradiated fuel reprocessing, respectively). In light of the growing interest in building nuclear power plants in developing economies, several versions of the fuel-cycle services concept are being advanced [for example, the Massachusetts Institute of Technology (MIT) report on The Future of Nuclear Power, and the proposals of International Atomic Energy Agency (IAEA) Director General ElBaradei]. This basically entails institutionalization of an assured fuel supply and spent fuel removal for countries forgoing enrichment and reprocessing, along with acceptance of the IAEA Additional Protocol.

Spent fuel removal will be a considerable incentive for many countries, especially those with relatively small nuclear power deployments. However, implementation requires that the spent fuel go somewhere, putting a spotlight on the unresolved issue of spent nuclear fuel (SNF) and high-level waste (HLW) disposal. There remains an international consensus that geological isolation is the preferred approach and that the scientific basis for it is sound. Nevertheless, after several decades, implementation has not yet been carried out anywhere, and public concerns present major obstacles in many countries. The U.S. R&D program is narrowly focused on Yucca Mountain and does not have the breadth or depth to support a disposition program robust enough to address a major global growth in nuclear power. Nor has a policy on international spent fuel storage been established. Progress on SNF/HLW management is essential for the fuel-cycle services approach to NPT implementation.

May and Isaacs note “the debate, almost theological in nature, between adherents of the once-through cycle and those of reprocessing.” Although the debate is much more pragmatic than suggested by this remark, the fuel-cycle issue is quite germane to our discussion. Indeed, long-term waste management (meaning beyond a century or so) today provides the principal rationale for advocating the PUREX/MOX process or more advanced closed fuel cycles that are still on the drawing board. However, for the first century, the net waste management benefits of closed fuel cycles are highly arguable (see the MIT report). In addition, PUREX/MOX operations as practiced around the world have led to the accumulation of 200 metric tons of separated plutonium, enough for tens of thousands of weapons. This manifests the proliferation risk that led to the schism of the 1970s between the United States and several allies, with the United States advocating the once-through fuel cycle, followed by geological isolation of SNF in order to avoid the “plutonium economy.” In addition to this plutonium accumulation, operational choices further exacerbate proliferation concerns; for example, plutonium is transported over considerable distances from the French reprocessing plant at La Hague to fuel fabrication plants in southern France and in Belgium. Of course, the once-through fuel cycle also poses proliferation issues if the plutonium-bearing SNF is not disposed of in a timely way. In other words, neither fuel cycle is functioning today in a way that would support the fuel-cycle services approach in the long run. This needs to be fixed.

Finally, we note that a dialogue between the United States and Russia several years ago, never consummated with a signed agreement, may provide a template for progress. The discussion took place in the context of U.S. concern about Russian fuel-cycle assistance to Iran. Relevant elements of a cooperative approach included Russian supply of fresh fuel to Iran and spent fuel return (Russian environmental law has been modified to permit such spent fuel return), no assistance to Iran with enrichment or reprocessing, a decades-long moratorium on further accumulation of plutonium from Russian commercial SNF, and joint R&D on geological isolation and on advanced proliferation-resistant fuel cycles. In effect, this would not require either country to renounce its currently preferred fuel cycle, would facilitate Iran’s generation of electricity from nuclear power, and would build in a substantial “do no harm” time period for providing a sounder technical basis for informing national choices on long-term SNF/HLW management consistent with nonproliferation and economic criteria.

ERNEST J. MONIZ

Department of Physics

Massachusetts Institute of Technology

Cambridge, Massachusetts

Ernest J. Moniz is former associate director for science in the White House Office of Science and Technology Policy.


Michael May and Tom Isaacs make a convincing case for stronger nonproliferation measures without losing sight of the benefits brought about by the peaceful applications of nuclear energy. Their analysis transcends the frantic proposals of those who want to “replace the Non-proliferation Treaty (NPT) by a prohibition treaty.” No less important from my perspective, the authors turn their backs on U.S. unilateralism and recognize the need to consider the interests of other countries.

Securing weapons-grade materials in Russia, phasing out the use of highly enriched uranium in research reactors, improving the physical protection of nuclear materials, and promoting the worldwide application of tighter controls by the International Atomic Energy Agency (IAEA) through the Additional Protocol are obvious yet essential measures that indeed deserve immediate attention. As to the civilian nuclear fuel cycle, the authors note quite correctly that the debate between proponents and opponents of fuel reprocessing is of a theological nature, since the economic, environmental, and even proliferation differences are too small and uncertain to bother about.

May and Isaacs call for the development of more secure, but unspecified, fuel cycles. To succeed in this undertaking, one needs first, in my view, to discard the obsolete idea that “all plutonium mixtures are weapons-usable,” a taboo that hampers a sound approach to optimum fuel cycles. The proliferation risk of plutonium depends greatly on its isotopic composition, that is, on the quality of the mixture. Consequently, the central proliferation risk of all civilian reactors is associated with the high-grade plutonium contained in fuel that has spent only a short time in residence; therefore, technical fixes and verification schemes should be developed to ensure that such high-quality material cannot be diverted. On the other hand, verification could certainly be relaxed for the low-quality plutonium coming out of modern nuclear plants with long fuel residence times, in particular those using reprocessed plutonium.

The internationalization of large fuel-cycle facilities makes sense in both economic and nonproliferation terms. For example, if Brazil, Argentina, and Chile were to share and jointly operate the Brazilian uranium enrichment facility, these three countries would draw the benefits of a secure fuel supply for their nuclear activities while silencing international concerns. The adoption of international deep geological repositories for spent fuel would also make economic sense, especially if high- and low-quality plutonium were separated.

As director general of the IAEA, Hans Blix used to say that effective controls rest on three pillars: broad access to information, unrestricted access to facilities, and the clout of the United Nations Security Council to ensure compliance. The Additional Protocol has strengthened the first two pillars markedly, but the Security Council has failed its nonproliferation mission over and over again. May and Isaacs are right to call for institutional arrangements at the NPT and Security Council level. But much U.S. leadership would be needed to achieve this ambitious objective. By failing to ratify too many international treaties and by claiming a right to stand aside and aloof, the United States has for the time being lost the credibility and the authority to define the future of nonproliferation agreements.

BRUNO PELLAUD

Icogne, Switzerland

Bruno Pellaud is former deputy director general for safeguards of the International Atomic Energy Agency.


New roles for nuclear weapons

In “A 21st-Century Role for Nuclear Weapons” (Issues, Spring 2004), William Schneider, Jr. endorses the nuclear weapons policy of the current administration as promulgated in its Nuclear Posture Review and National Defense Strategy papers. He describes the primary motive for these policies as “dissuasion” of presently unknown adversaries from accumulating weapons of mass destruction (WMD).

Schneider asserts that if all hostile WMD stocks were held “at risk” by various means, then potential proliferators would be dissuaded from acquiring WMD, emphasizing that he distinguishes “dissuasion” from “deterrence.” Yet deterrence continues to play a central role in the U.S. nuclear posture. Any state, including a so-called “rogue,” would be deterred from using nuclear weapons, as the Soviet Union was during the Cold War, by the realization that the very existence of the state was at stake. However, should nuclear weapons or weapons-usable materials reach the hands of sub-state terrorists, deterrence has little value against those who believe that life in heaven is preferable to life on Earth.

The implementation of Schneider’s dissuasion is to target all sites of storage or deployment of potentially hostile WMD. But can we know precisely where they are? Before 9/11, our intelligence agencies failed to provide the government with information deemed actionable enough to prevent that attack. Conversely, the initiation of the war against Iraq was supported by interpretations of intelligence provided to the administration concerning WMD whose very existence after 1991, let alone location, remains unsubstantiated today. Thus, I agree that intelligence collection, dissemination, and interpretation need improvement, but Schneider does not indicate how such an upgrade could be achieved to a degree sufficient to dissuade a WMD proliferator from pursuing his quest. Schneider’s statement that “confidence in the inspection provisions of the Nuclear Non-Proliferation Treaty (NPT) obscured efforts to obtain knowledge of clandestine WMD programs” hardly explains why those inspections provided information superior to that provided by U.S. intelligence.

Schneider downplays the value of international agreements, which he seriously misrepresents. He wrongly and repeatedly describes the Anti-Ballistic Missile Treaty as having been terminated by “mutual consent” of the United States and Russia. The U.S. withdrawal was a unilateral act taken over the strong objections of all interested states, including Russia. In fact, Russia in response withdrew from the previously signed START II Treaty. Schneider also errs in stating that the Bush administration “reached a bilateral agreement with Russia to institutionalize a reciprocal reduction in numbers of nuclear delivery systems and their associated nuclear payloads.” Neither is true. The termination of START II removed previously agreed limits on delivery systems, and the Moscow Treaty of May 2002 does not restrict the numbers of nuclear weapons beyond those “operationally deployed” on strategic systems. Even if it did, this treaty provides no mechanism for verification.

The NPT entered into force in 1970, with continued support by all U.S. administrations, including the current one. The 1995 Review Conference converted that treaty to one of indefinite duration: In that review, the United States agreed to ensure the irreversibility of international arms control agreements. That commitment was violated by the unilateral withdrawal from the ABM Treaty.

Schneider states that “proliferation of WMD was stimulated as an unintended consequence of a U.S. failure to invest in technologies such as ballistic missile defense that could have dissuaded nations from investing in such weapons.” But the United States has invested $130 billion in technologies to intercept ballistic missiles, which remain the least likely means by which a hostile actor would deliver nuclear weapons to U.S. soil. No terrorist would have a ballistic missile. Although rogue nations could develop such devices, with North Korea being a primary candidate, a ballistic missile has a return address clearly declared when launched. Therefore, such delivery should be deterred as it was during the heyday of the Soviet Union. But interdiction of hostile delivery of nuclear weapons by other means, not mentioned by Schneider, such as container ships, land transport, aircraft, and short-range cruise missiles, as well as safeguarding the huge stocks of nuclear weapons and materials, remains undersupported.

What is lacking in Schneider’s analysis is how proliferation is affected by the administration’s emphasis on military over nonmilitary methods in addressing international issues. In the long run, proliferation of new military technologies cannot be stopped, and never has been, unless nations are convinced that their national security is better served without those technologies, including nuclear weapons, than with them. There is no silver bullet to achieve this result. As the dominant nation in conventional military strength, the United States should lead in initiating moves to strengthen the nuclear nonproliferation regime, above all by deemphasizing the role of nuclear weapons in international relations. Schneider’s call for a new generation of nuclear weapons does the opposite.

WOLFGANG K. H. PANOFSKY

Director Emeritus

Stanford Linear Accelerator Center

Menlo Park, California


The new roles and requirements that William Schneider, Jr. recommends for U.S. nuclear weapons define a dangerous direction for our national security policy. In 2002, President George W. Bush stated that “The gravest danger this nation faces lies at the crossroad of radicalism and technology.” To prevent that danger from being realized, it is imperative that U.S. policy for security in the 21st century gives highest priority to keeping nuclear weapons out of the hands of dangerous leaders in rogue nations and subnational groups, including suicidal terrorists.

This will require U.S. leadership in forging a broad diplomatic collaboration between nuclear and non-nuclear weapon states that are committed to preserving and strengthening a nonproliferation regime that has recently come under severe challenge. It is difficult to think of a policy more harmful to building a consensus against proliferation than the one Schneider has recommended, which calls for developing new nuclear weapons that are advertised as more usable for limited military missions by virtue of their reduced, but still considerable, collateral damage. This proposed course of action would increase the gulf between us and the non-nuclear states, on whose cooperation we must rely to prevent proliferation, while at the same time enhancing the purported military utility of such weapons and, consequently, the motivation of those nations to acquire them.

An important first step in an effort against proliferation is continuing the moratorium on underground nuclear tests en route to a Comprehensive Test Ban Treaty, a goal strongly endorsed by many of the 185 nations (out of a total of 189) that committed themselves at the United Nations (UN) in 1995 to extending the Non-Proliferation Treaty (NPT) into the indefinite future.

The United States and its allies should give priority to bringing into force a number of measures we have already endorsed for ensuring compliance with the NPT and call for UN sanctions to be enforced in cases of failure to comply. These include:

  • The Additional Protocol permitting onsite challenge inspections of suspicious activities, such as those now being carried out in Iran
  • The Proliferation Security Initiative for interdicting shipments of nuclear technology in violation of the NPT, such as the recent shipments of equipment for enriching uranium intercepted en route to Libya
  • Expanding the Nunn-Lugar Cooperative Threat Reduction program to provide secure protection for existing nuclear stockpiles, located mainly in the former Soviet Union, that contain weapons-grade fuel for approximately 100,000 nuclear bombs
  • Guaranteeing supplies of nuclear fuel to non-nuclear weapon states for peaceful purposes, to be provided from regional sources under international control. This would be a substitute for their possessing an indigenous nuclear fuel cycle capable of rapidly developing nuclear weapons should they break out from the NPT.

A U.S. commitment to build new, low-yield, and allegedly more usable nuclear weapons would be a bad idea. On technical grounds, such weapons would have limited effectiveness against the military targets frequently cited as reasons for deploying them: hardened and deeply buried underground bunkers and biological agents. A more serious consequence of such a program would be its harm to U.S. diplomatic efforts to strengthen the NPT with effective verification measures against new threats.

SIDNEY D. DRELL

Senior Fellow, Hoover Institution

Stanford University

Stanford, California


Deterring nuclear terrorists

The prospect of a nuclear weapon detonated by terrorists in a U.S. city is one that needs to be taken very seriously. Michael A. Levi’s article on attribution as part of a strategy of deterrence deserves a careful reading (“Deterring Nuclear Terrorism,” Issues, Spring 2004). If such an attack should occur, we should be prepared to extract the maximum possible technical information from the debris to gain insight about the source of the highly enriched uranium or plutonium used. However, we should be clear in advance that the forensic evidence will be ambiguous, much like the evidence for weapons of mass destruction in Iraq before the current war. The nuclear material could even come from the United States or one of its allies. What then?

It is hard for most of us to imagine any terrorist cause that would justify the first use of a nuclear weapon to kill hundreds of thousands of people in the twinkling of an eye. Yet such inconceivable acts are promoted by rhetoric from madrasas in the Islamic world and from some of our homegrown hate-mongers, like those who influenced Timothy McVeigh. Filling human minds with hatred is not so different from providing terrorists with material for nuclear weapons.

No one knows how the United States would respond to a nuclear terrorist attack. Let us hope that we never have to find out. But judging from the response to all previous attacks, for example, Pearl Harbor and September 11, the United States will probably react strongly. In the present state of the world, it seems likely that television coverage after a nuclear attack will alternate between scenes of unimaginable carnage in the radioactive ruins of a major U.S. city and jubilant crowds in the streets of countries that have long tolerated or even encouraged the teaching of hatred. What will be the response of a grievously wounded United States to such countries?

As Levi suggests, the prospect of nuclear retaliation will certainly encourage nuclear states to more carefully control highly enriched uranium and plutonium. But perhaps countries that have nurtured and glorified terrorists should also be concerned about retribution.

WILLIAM HAPPER

Princeton, New Jersey

William Happer is former director of the Office of Energy Research in the U.S. Department of Energy.


In early 1961, John J. McCloy was named White House Disarmament Adviser. He immediately formed eight committees, one on “war by accident, miscalculation, or surprise.” I was privileged to chair that committee, which consisted of about eight superbly qualified people.

We made several recommendations. One was that a direct continuous communication channel be opened between Washington and Moscow. The idea had been around but never acted on; we gave it a push and, largely because one of our members was well positioned in the State Department, Cyrillic-alphabet teletypewriters were delivered to the State and Defense Departments in 1963. The “hotline” was established.

We also recommended that senior military officers be exchanged between North American Air Defense Command and corresponding Soviet installations, so that unusual phenomena picked up by radar might be interpreted with professional help from the other side. Nothing came of our proposal, but the same idea did emerge at high levels of our government 40 years later.

One of our most important recommendations was that we design all nuclear weapons so that if a nuclear explosion occurred anywhere in the world, we could identify with certainty whether it was one of ours; even more, that we could identify, if it were one of ours, just where it came from, so that we could investigate whether more were missing and determine what lapses in security needed to be remedied.

No one on the committee was a weapons expert, but we did consult with experts to satisfy ourselves that the idea was feasible. We also thought it valuable to persuade the Soviets to be sure they could identify whether an explosion was one of their own. They could lie to us, but it was important that they be able to know themselves whether their own security was lax somewhere.

Michael A. Levi suggests using intelligence to determine the explosion debris characteristics of different countries’ weapon technologies. We had in mind inserting, or admixing, various substances that would emit recognizable isotopes as signatures for identification.

As far as I could ascertain, nothing ever came of our recommendation. But, of course, I wouldn’t know if anything ever did. I did occasionally inquire of people who I thought would know whether anything of the sort was ever done; nobody ever seemed to have considered or heard of the idea.

Until the Oklahoma City bombing tragedy. Then our idea emerged (not due to us), and the concept of taggants was broached. I believe that a National Research Council panel looked at the possibility of tagging individual batches of potentially explosive ingredients, so that when an explosion occurred it might be possible to identify the origin of the ingredients.

I thoroughly appreciate the Levi article. I have a few addenda. One is that identifying our own weapons, if they explode somewhere unauthorized, may be as important as identifying somebody else’s. Second is that a “signature” should be an integral part of the construction of a U.S. weapon; I can only hope that this could be accomplished by retrofit, if it was not done in the original fabrication. Third is that responsible nations may wish to do the same; not that we’d necessarily believe them if they denied responsibility, but so that they could know whether their own weapons had been misused and take steps to improve their security.

Maybe the Levi article can help to revive and extend the proposal of the McCloy committee on “war by accident, miscalculation, or surprise.”

THOMAS C. SCHELLING

Distinguished University Professor

University of Maryland

College Park, Maryland


How soon for hydrogen?

In “The Hype About Hydrogen” (Issues, Spring 2004), Joseph J. Romm devotes considerable energy to highlighting the challenges that must be addressed in realizing a hydrogen-based economy. As his title implies, he concludes that the world’s interest in this promising future is more about hype than reality.

At General Motors, we see the future quite differently. We believe there are many compelling reasons to move as quickly as possible to a personal mobility future energized by hydrogen and powered by fuel cells. These include substantial reductions in vehicle exhaust and greenhouse gas emissions, energy security, geopolitical stability, sustainable economic growth, and, most importantly, the potential to design vehicles that are more exciting to own and operate than today’s automobiles.

GM has demonstrated this design potential with our Hy-wire prototype, the world’s first drivable fuel cell and by-wire vehicle. We also have made great progress in testing our fuel cell technology in real-world settings. We have vehicle demonstration programs under way in Washington, D.C. and Tokyo, Japan, and are partnering with Dow Chemical on the world’s largest application of fuel cell power in a chemical manufacturing facility.

Given the fuel cell’s inherent energy efficiency, we estimate that the cost per mile of hydrogen is already close to that of gasoline used in today’s vehicles. In fact, our analyses have shown that the first million fuel cell vehicles could be fueled by hydrogen derived from natural gas, resulting in an increase in natural gas demand of only two-tenths of one percent. Our analyses also project that a fueling infrastructure for the first million fuel cell vehicles could be created in the United States at a cost of $10-15 billion. (In comparison, the cost to build the Alaskan oil pipeline in the mid-1970s was $8 billion, which equates to $25 billion in today’s dollars.)

Based on our current rate of progress, GM is working hard to develop commercially viable fuel cell propulsion technology by 2010. This means a fuel cell that is competitive with today’s engines in terms of power, durability, and cost at automotive volumes. Beyond this, GM plans to be the first manufacturer to sell one million fuel cell vehicles profitably. Like all advanced technology vehicles, fuel cell vehicles must sell in large quantities to realize a positive environmental impact. How quickly we see significant volumes depends on many factors, including cost-effective and conveniently available hydrogen refueling for our customers, uniform codes and standards for hydrogen and hydrogen-fueled vehicles, and supportive government policies to help overcome the initial vehicle and refueling infrastructure investment hurdles.

For the past 100 years, GM has been on the leading edge of pioneering automotive development–not just because we have worked the technology but, equally importantly, because we have been willing to lay out a long-term vision of the future and use our considerable resources to realize the vision. We are committed to the future–so it is not a question of whether we will be able to market exciting, safe, and affordable fuel cell vehicles, but when. All it will take is the collective will of the auto and energy companies, government, academia, and other interested stakeholders. Today, we see this collective will building toward a societal determination to create a hydrogen economy.

This is not hype. It’s reality.

LARRY BURNS

Vice President, Research & Development and Planning

General Motors Corporation

Detroit, Michigan


As Joseph J. Romm knows from his tenure with the U.S. Department of Energy (DOE), the department promotes both environmental and national energy security goals. The environment and global climate stability are top priorities, and so is reducing our dependence on foreign oil. Romm focuses exclusively on greenhouse gases from electricity generation and ignores long-term energy security.

Currently, the United States imports 55 percent of its oil, a share projected to reach 68 percent by 2025. Transportation drives this dependence, accounting for two-thirds of the 20 million barrels of oil used daily. U.S. economic stability will be threatened as growing economies such as China and India put increased demand on finite petroleum resources.

We agree that the challenges facing the hydrogen economy are difficult, but they are not insurmountable. We can concede to these challenges and do nothing, or we can develop a long-term vision and implement a balanced portfolio of near- and long-term technology options to address energy and environmental issues. We choose to do the latter.

Romm should be aware that our near-term focus is on high-fuel-economy hybrid vehicles. The government is spending more than $90 million per year to lower hybrid component costs. However, in the long term, increased fuel economy is not sufficient. A substitute is required if we are to become more self-reliant. Romm does not offer a viable alternative to hydrogen. Hydrogen is an energy carrier that can be made using diverse domestic resources and that addresses greenhouse gases because it decouples carbon from energy use.

Romm’s article might lead your readers to believe that the Bush administration is rushing to deploy hydrogen vehicles at the expense of renewable energy research. This is simply not the case.

First, DOE’s plan calls for a 2015 commercialization decision by industry based on the success of government and private research. There are no arbitrary sales quotas or scheduled deployment targets. Only after consumer requirements can be met and a business case can be justified will market introduction begin.

Second, money is not being shifted away from efficiency and renewable programs to pay for hydrogen research. The administration’s fiscal year (FY) 2005 budget requests for research in wind, hydropower, and geothermal are all up as compared to FY 2004 appropriations. After unplanned congressional earmarks are accounted for, solar and biomass requests are also up.

Romm treats efforts to curb greenhouse gas emissions and hydrogen as mutually exclusive. This is simply not the case. In fact, the renewable community is embracing hydrogen because it addresses one of the most significant shortcomings–intermittency–of abundant solar and wind resources. Romm also acknowledges that by 2030, coal generation of energy may double. This is all the more reason to pursue carbon management technologies in projects such as FutureGen. As announced by President Bush, FutureGen will be the world’s first zero-emissions coal-based power plant. Carbon will be captured and sequestered while producing electricity and hydrogen. Nuclear energy is another carbon-free source of hydrogen.

As you can see, there are tremendous synergies in the long-term vision of producing carbon-free electricity while also producing hydrogen for cars, all while addressing climate change and energy security.

DAVID K. GARMAN

Assistant Secretary

Energy Efficiency and Renewable Energy

U.S. Department of Energy

Washington, D.C.


Joseph J. Romm’s article was a huge relief to me. As a career expert in many aspects of energy policy and technology, I have been dismayed that the most basic science of hydrogen production, transportation, storage, and so on has not been addressed, or at least publicized. I have listened to many presentations about hydrogen fueling and have always asked whether the thermodynamics of the entire hydrogen production and use cycle have been calculated. The answer has always been either “no” or a blank stare. Romm’s article, in effect, does this calculation.

I would like to read or hear about the issues surrounding the sequestration of carbon from carbon dioxide. It is a companion technological question and one that must be understood scientifically and economically when trying to craft any policy or research agenda addressing future energy supply and all its ramifications.

JOE F. MOORE

Joe F. Moore is retired CEO of Bonner & Moore Associates and a member of the Presidents’ Circle of the National Academies.


Given the amount we don’t know about hydrogen as an energy carrier, it is remarkable how much we have to say about it.

I accept Joseph J. Romm’s major point that hydrogen offers no near-term fix for global climate change. But that’s not what drives the interest in hydrogen. Many current advocates seek reductions in the regional air pollutants that throttle our metropolitan areas, but without giving up our famously auto-dependent lifestyle, whereas others simply want to reduce petroleum imports.

One driver–preserving the automobile’s viability–explains the support for hydrogen among automakers, Sunbelt politicians facing excess levels of ozone, and pro-sprawl advocates. They say that if we can just give our cars and trucks cleaner fuel, we won’t have to acknowledge roles for public transit and land use regulation. Thus, we see an antiregulation U.S. president from Texas and automobile manufacturers worldwide promoting a billion-dollar hydrogen R&D roadmap, and a Hummer-driving California governor promoting an actual hydrogen highway.

Energy carriers such as electricity and hydrogen create value by transforming a wide variety of primary sources into clean, convenient, commodity energy. These energy carriers allow us to diversify our primary energy supplies and shift the mix toward indigenous resources. Electricity reversed the decline in the U.S. coal industry by preventing oil and gas from competitively displacing that dirty high-carbon fuel, and we now burn far more coal than we did at the peak of the industrial revolution. Hydrogen could become the preferred transportation energy carrier, letting coal, natural gas, nuclear fission, and other sources displace imported petroleum in automotive uses. Our ubiquitous electricity networks demonstrate that we are willing to sacrifice much thermodynamic efficiency in exchange for cleanliness and convenience at the point of use. The same may someday be true of hydrogen: This is the compelling logic of economic efficiency, not engineering efficiency.

Energy security persists as a driver of great rhetorical importance in promoting hydrogen as an energy carrier. Although the world is not yet short of petroleum, its concentration in a few politically unstable areas does have profound effects. The United States has recently demonstrated its willingness to spend a full year’s worth of world oil industry revenues on regime change in Iraq. Nothing prevents us from spending similar amounts–perhaps just as wastefully but with less loss of human life–on the development of alternative domestic energy sources and new energy carriers like hydrogen. The security argument adds geopolitical efficacy to the calculus of economic efficiency, further removing engineering efficiency from the limelight.

Needed is more diversified research funding on hydrogen production, storage, and use. Also needed are small localized experiments that give us engineering experience and investigate hydrogen’s actual economic and geopolitical value. The hydrogen economy, if it ignites, will be highly local for its first decades, just as electricity and natural gas were. The chicken-and-egg problem will take care of itself if enough experiments are conducted and if some prove successful. Only at that point will arguments over dirty (carbon-emitting) versus clean hydrogen sources become salient.

CLINTON J. ANDREWS

Director and Associate Professor

Program in Urban Planning and Policy Development

E.J. Bloustein School of Planning and Public Policy

Rutgers University

New Brunswick, New Jersey


Joseph J. Romm presents a well-documented argument regarding the impracticality, from both economic and environmental perspectives, of shifting in the foreseeable future to a transportation fleet fueled by hydrogen. His analysis appears accurate and sensible, but he glaringly fails to mention the 800-pound gorilla: nuclear power. Until the United States generates most of its electricity from nuclear power plants, reserves its natural gas supplies mainly to meet home and industrial heating needs, increases the overall efficiency of its liquid-hydrocarbon-fueled transportation fleet, and meets the chemical industry’s needs mainly with coal and biomass feedstocks, it will not have a credible energy policy.

Such a shift in domestic energy utilization would require no massive breakthroughs in science, technology, or infrastructure, and it would drastically reduce per capita CO2 emissions (along with sulfur, nitrogen, and other emissions) while greatly reducing our dependence on imported hydrocarbons. More important, such a shift could be implemented easily and gradually through selective legislation, taxes, and tax credits, without posing a serious threat to the overall economy and while allowing the free enterprise system to maximize the overall benefit/cost ratio. It appears to be the U.S. destiny to lead the world economically and technologically into the 21st century, and it is the nation’s responsibility to do so sensibly and aggressively. It must demonstrate that a democratic and technologically advanced society can enjoy the fruits of freedom without fouling its own nest and everyone else’s at the same time.

I am quite certain that an accurate and comprehensive analysis of overall environmental, safety, and health effects would overwhelmingly favor nuclear power for domestic electricity needs, and equally certain that the most sensible route to drastically reduced CO2 emissions lies in conservation. I believe it is the responsibility of the federal government to educate the public effectively and honestly regarding the benefits, costs, and consequences of current and proposed energy sources. Federal R&D funds should be used to bolster this case, demonstrating improvements in safety, efficiency, and the environment across the entire range of fuel production and utilization.

DAVID J. WESOLOWSKI

Oak Ridge National Laboratory

Oak Ridge, Tennessee


I have enormous respect for the analytical ability of Daniel Sperling and Joan Ogden, who have set forth a strong rationale for their long-term “Hope for Hydrogen” (Issues, Spring 2004). My problem is that their conclusion is even more apt for the short term. The public interests of America in reducing our dependence on oil from nations that hate us and abating global warming can’t afford to wait for a fuel-cell car, which has been 15 years away for the past 15 years.

The assumption that hydrogen is or must be decades away is the false premise of both the academic proponents of hydrogen and the self-appointed protectors of the environment, who assume that this nation is incapable of mounting a “Moon-shot”-type initiative for renewable hydrogen. They both fall for the automobile/oil industry’s “educational” effort that has made hydrogen and the fuel cell seem joined at the hip. They are not!

The internal combustion engine, with relatively minor adjustments, can run quite well on hydrogen. In fact, an internal combustion engine, when converted to hydrogen, is 20 to 25 percent more efficient. A hydrogen hybrid vehicle is not a distant dream (as is the fuel cell) but a present reality, if only the public and political leaders were really educated on this subject. For example, the Ford Motor Company unveiled its Model U, a hydrogen-hybrid SUV with a range of some 300 miles per fill-up, more than a year ago.

A key question is where the hydrogen originates. If it’s from domestic fossil fuels, as Sperling and Ogden as well as the critics of hydrogen assume, it’s not useful for carbon reduction but does reduce oil imports. But if the hydrogen originates in water, it is super-plentiful; and if solar, wind, geothermal, or biomass is used to generate the electricity to split the water, a carbon-free sustainable energy source exists.

Let me explain why I believe that the real-world facts of life (and death) make a compelling case for starting the hydrogen revolution at once. The issues that could be alleviated by substituting renewable hydrogen for oil in the transportation sector are the following:

Reducing our dependence on imported oil. No one really doubts that we are at war in significant part because of oil. Petrodollars have funded the terrorists. America must look the other way at Saudi Arabia because of our dependence on their ability to raise or lower the price of oil with their spare capacity. The national security threat of oil dependence is a clear and present danger. More efficient cars are necessary but insufficient. Until we start building cars without oil, the increasing populations here (and in China and India) will control our destiny.

Global warming. The issue is a well-known and serious threat to all humankind. A renewable hydrogen economy would be carbon-free. But “Hope for Hydrogen” says that hydrogen is not competitive and would deliver fewer benefits than “advanced gasoline and diesel vehicles.” This statement ignores the benefits of zero-oil vehicles in reducing oil imports, and it assumes that hydrogen must come from fossil fuels. The answer–renewable hydrogen–is assumed to be decades away. And it will be unless we recognize that the renewable resources and the technology to harness them are much closer to commercial reality than the fuel cell. What is lacking is a sense of necessity and the leadership to mount a “can-do” initiative.

Local air pollution. Gasoline and diesel continue to be serious sources of local air pollution. Burning hydrogen creates water vapor and nitrogen oxides, which can be controlled to near-zero levels. There are no particulates. It’s a clear benefit.

The hope for hydrogen is not a distant dream. It could be a reality in this decade. We need to take the discussion out of the hands of people who see only the problems–and they are real–but don’t see the vital need and opportunity to overcome them in 5 to 10 years, not decades. There is a legitimate fear that we may drift into fossil/hydrogen energy. The best way to avoid it is to promote renewable hydrogen. A solar/hydrogen initiative of Moon-shot intensity is the answer. No one can say for sure it can’t be done, starting now, unless we try.

S. DAVID FREEMAN

Chairman

Hydrogen Car Company

Los Angeles, California

S. David Freeman is former chief executive of the Tennessee Valley Authority and the New York Power Authority.


The debate over whether hydrogen is hype or hope has reached new levels of hype itself. There are important technical, economic, environmental, and policy questions at hand. Their honest answers may be vital to our transportation future.

Opponents correctly point to the major technical and economic hurdles that hydrogen and fuel-cell vehicles must overcome to be a market success. They also remind us that a hydrogen future is not guaranteed to be a clean future. But the critics’ warnings that clean hydrogen production will divert valuable natural gas fuel and renewable electricity from the power sector in the near term seem at odds with their assertion that the hydrogen fuel-cell vehicle market is decades away.

Hydrogen is clearly being used in policy circles to deflect the pressure to take meaningful action today to curb global warming emissions from transportation; this is standard political operating procedure, however unfortunate. But there are much larger political obstacles in the way of sensible policies to promote readily available efficiency technologies than the prospect of hydrogen.

Proponents of hydrogen correctly point to the long-term environmental gains achievable from fuel-cell vehicles if the hydrogen is produced with clean low-carbon sources such as renewable electricity or biomass. Efficiency is a vital first step, but it alone is not enough to address the threats of climate change and oil dependence. Proponents also emphasize that automakers have rarely exhibited so much enthusiasm for an alternative to business as usual. Large automaker research (and public relations) budgets alone are not a justification for hydrogen fuel cells, but they are a necessary component of the transition.

Focusing exclusively on hydrogen as the only long-term solution, however, is too risky given the importance of addressing the energy and environmental impacts of transportation. And suggesting that hydrogen fuel-cell vehicles can meaningfully address our transportation problems nationally within the next two decades is both unrealistic and dangerous.

Renewable hydrogen-powered fuel-cell vehicles offer one of the most promising strategies for the future, and we cannot afford to pass it up. But we must also move forward with the technologies at hand today if we want to reduce pollution and oil dependence. The choice is not either efficiency or hydrogen. The right choice is both.

JASON MARK

Director, Clean Vehicles Program

Union of Concerned Scientists

Washington, D.C.


Federal R&D funding

“A Revitalized National S&T Policy” by Jeff Bingaman, Robert M. Simon, and Adam L. Rosenberg (Issues, Spring 2004) diagnoses a disease–the underfunding of long-term civilian R&D–and proposes some cures. The diagnosis is accurate and the consequences of the lack of a cure are even more dire than outlined in the article.

I like the authors’ description of science and technology (S&T) as “the tip of the spear” in the creation of high-wage jobs. I would have said that it is federally funded S&T that is the tip of the spear, because almost no long-term R&D is now conducted by U.S. industry. Instead, the venture capitalists scour the university and national laboratory scene for potential innovations and incubate the most promising. Six or seven out of 20 will fail and, of the rest, much of the intellectual property gets licensed to or bought by industry. There are the occasional blockbusters like Google or Cisco Systems that keep the venture capitalists eager to seek out new things. It is these innovations that create the high-wage jobs.

The change in the competitive environment that has driven long-term research out of industry, and the remedies required to keep the engine of innovation running, have been well described in two recent reports to the president from his own President’s Council of Advisors on Science and Technology (Assessing the U.S. R&D Investment, October 2002, and Sustaining the Nation’s Innovation Ecosystems, Information Technology and Competitiveness, January 2004). However, the administration has chosen to ignore the advice of its own committees and to proceed with a de facto S&T policy that emphasizes only the military and shortchanges the civilian sector. The administration points out that R&D in the federal budget has never been higher. That is indeed true, but a look behind the rhetorical curtain shows that it is “D” that is way up, dominated by the military, and increases in “R” are negligible.

The remedy proposed by Bingaman et al. is congressional action. Although the analysis in the article is agreed to by many in Congress, the pressures on the budget from the deficit and military needs make it unlikely that administration priorities will be changed by congressional action.

For the longer term, the article recommends increasing the number of deputies at the President’s Office of Science and Technology Policy (OSTP). However, the problem now is not insufficient voices in OSTP but deaf ears in the White House.

I like the suggestion that the congressional budget committees take a more unified look at the entirety of the federal research budget rather than the piecemeal consideration it gets now because of its spread over several budget functions. It would be even better if the regrettable dissolution of the congressional Office of Technology Assessment were reversed. Congress badly needs a nonpartisan office that can address in depth its concerns on many S&T issues.

BURTON RICHTER

Director Emeritus

Stanford Linear Accelerator Center

Menlo Park, California


Saving Earth’s rivers

I thank Brian Richter and Sandra Postel for highlighting, in their book Rivers for Life and their article “Saving Earth’s Rivers” (Issues, Spring 2004), a growing awareness of the true costs of water-related development. Globally, we have manipulated river flows, abstracted ground and surface water, and moved water between catchments, thinking only, until recently, of the benefits and direct construction and operation costs.

We have rarely included in our calculations the loss of estuarine, coastal, floodplain, and river fisheries; the reduced life of downstream in-channel reservoirs due to sedimentation; the loss of land and infrastructure through increasingly severe floods and channel changes; the loss of food-producing coastal deltas and estuaries due to saltwater intrusion; the disappearance of aquatic ecosystems that wildlife have used for thousands of years; the increase in toxic algal blooms and decline in irrigation-quality water; and the need for ever more flood control dams and water purification plants. Nationally and internationally, these not-so-hidden costs are largely unquantified but undoubtedly extremely high.

In developing countries, the impacts of such river changes are likely to be higher and more immediately devastating than in developed countries. Hundreds of millions of rural people in developing regions rely directly on rivers for subsistence–for fish, wild vegetables, medicinal herbs, grazing, construction materials, drinking water, and more–as well as having complex religious and social ties with the rivers. They are rarely the people who will benefit most from a major water development, and indeed it may lead to their losing what few benefits they already gain from the river.

We need a new approach to managing rivers, one that is not antidevelopment but pro-sustainable development. An approach that considers all the likely costs and benefits, including those that will be far removed in space and time from the development, so that truly balanced decisions can be made on whether and how to go ahead. We need politicians and development funding bodies who will embrace this more transparent approach; an informed public that will push them; and water scientists who will leave their quiet sanctuaries and step forward to help, managing their scientific uncertainty while being determined to play their part in saving Earth’s rivers. We need to recognize that rivers will change as their water supplies are manipulated and work together to decide how much change is too much.

All of these elements exist in isolated pockets here and there across the world. But more is needed–much more. How can we who are specialists in relevant disciplines help countries get started? Help them understand that managing the health of their rivers is not a luxury, a threat, or a restriction, but rather a way of ensuring balance and quality in their lives, of giving them the power to control their future instead of being its victims, and of helping them avoid developments that bring more costs than benefits? I feel we need a concerted international effort to give the “managed flows” approach global legitimacy: There are several international bodies that might be willing to act as an umbrella organization for such a move. I would welcome feedback from professionals in the water or related fields on possible ways to take this forward.

JACKIE KING

Freshwater Research Unit

University of Cape Town

Cape Town, South Africa


Energy futures

In “Improving Prediction of Energy Futures” (Issues, Spring 2004), Richard Munson explores how energy models might be improved and how their predictions might be better used in policymaking. Clearly, there are settings in which short-term policy choices should be informed by energy model predictions. But in considering longer-term issues, such as policies for managing greenhouse gas emissions or investments in basic technology research, it is easy to forget that prediction is not always the best objective in modeling.

Efforts to model the energy system many decades into the future often produce results that look like a spreading fan, with growing contributions from a range of different generation technologies (gas, coal, nuclear, renewables). Perhaps this is how the future will actually unfold. However, the historical evolution of many technologies argues for caution. Single technologies have sequentially played a dominant role in many sectors of the energy system. Horses and coal-fired rail dominated transportation for most of the 19th century. Today, virtually all U.S. highway traffic, and most U.S. rail traffic, employs petroleum-based fuel and internal combustion engines, although several other technologies, such as steam cars and gas turbine locomotives, were once viewed as contenders. Over a similar period, street lighting has moved from whale oil, to natural gas, to incandescent electric, to high-pressure mercury vapor, and now largely to high-pressure sodium vapor lamps. In short, different single technologies have dominated the market over time. Although we lack a robust theoretical understanding of the evolution of energy systems, it is likely that three important nonlinearities–economies of scale, learning by doing, and the presence of network externalities–have played an important role in generating this one-big-winner-at-a-time pattern of technological evolution.
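To make that lock-in mechanism concrete, here is a minimal toy simulation of technology adoption under increasing returns, loosely in the spirit of W. Brian Arthur's models; the number of technologies, the learning bonus, and the noise level are all invented for illustration and come from neither the letter nor any actual energy model.

import random

def simulate_adoption(n_adopters=10000, n_techs=4, bonus_per_adopter=0.01, seed=1):
    """Each adopter picks the technology whose perceived payoff is highest,
    where the payoff is intrinsic merit plus an increasing-returns bonus
    that grows with the installed base, plus some noise."""
    random.seed(seed)
    merit = [random.random() for _ in range(n_techs)]   # intrinsic merit of each technology
    installed = [0] * n_techs                           # installed base of each technology
    for _ in range(n_adopters):
        perceived = [merit[i] + bonus_per_adopter * installed[i] + random.gauss(0, 0.5)
                     for i in range(n_techs)]
        installed[perceived.index(max(perceived))] += 1
    return installed

shares = simulate_adoption()
print([round(s / sum(shares), 3) for s in shares])
# One technology typically ends up with nearly the entire market ("lock-in"),
# even when its intrinsic merit is not the highest.

Setting the learning bonus to zero makes the final shares track intrinsic merit instead of history, which is the contrast between a smoothly spreading fan of technologies and the one-big-winner pattern actually observed.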

Modelers and policymakers also need to better appreciate that larger-scale societal properties and system architectures are often not chosen by anyone. Rather, they are the emergent consequence of a set of seemingly unrelated social developments. For example, in the 19th century, nobody decided that all cities should have sanitary sewers. Rather, individuals installed running water before sewer systems were common. As historian Joel Tarr has noted, this created a major problem that few had thought about or anticipated, precipitating the need for municipalities to scramble to install sewer systems. In the 20th century, nobody decided that natural gas infrastructure should be developed to support home heating in cities like Pittsburgh. Rather, this was an emergent consequence of the construction of oil and gasoline pipelines from the Southwest to the Northeast to avoid submarine attacks on tankers and ensure fuel supplies during World War II, the conversion of those lines to natural gas in the face of concerns about international oil prices in the postwar period, and the need to find a home heating fuel that was cleaner than soft coal in order to address an air pollution problem that had grown to critical proportions. Much of our infrastructure and many of today’s social systems have emerged in similar ways.

Even when policy choices are made with the specific objective of shaping the energy system, the long-term state of that system can display strong path dependencies. For example, policy researcher David Keith notes that whether the energy system evolves into a network of large centralized power stations that distribute electricity over a super grid, or toward many small distributed combined heat and power generators that use piped-in gaseous fuel, could well depend on whether stringent carbon emission constraints precede or follow a substantial rise in the price of natural gas.

The potential for social and economic nonlinearities to cause single technologies to assume dominant roles in succession, the potential for seemingly unrelated developments to profoundly shape the basic structure of the energy system, and the likely path dependency of the future evolution of that system, are just three of the many factors that make long-range energy model prediction problematic. However, constructed and used appropriately, models can be a powerful tool to help identify and explore the factors that could give rise to a range of quite different futures, and to examine the robustness of proposed alternative policies across that range of possible futures, even if it is difficult or impossible to reasonably assess the probability that any particular future will come to pass. Policymakers and modelers would be well advised to reduce their emphasis on long-term prediction in favor of such uses.
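
To make this distinction concrete, the following toy sketch in Python compares candidate policies by their worst-case regret across a handful of scenarios rather than by their cost under a single predicted future. The policies, scenarios, and cost figures are invented purely for illustration; they are not outputs of any actual energy model.

    # Compare candidate policies by worst-case regret across scenarios,
    # rather than by their cost under one predicted future.
    # All policy names, scenario names, and numbers are hypothetical.
    costs = {
        "carbon_tax":        {"cheap_gas": 10, "costly_gas": 14, "rapid_renewables": 8},
        "renewable_mandate": {"cheap_gas": 13, "costly_gas": 12, "rapid_renewables": 7},
        "do_nothing":        {"cheap_gas": 6,  "costly_gas": 22, "rapid_renewables": 15},
    }

    def max_regret(policy):
        """Worst-case shortfall of a policy relative to the best available
        choice, taken scenario by scenario."""
        scenarios = next(iter(costs.values()))
        regrets = []
        for s in scenarios:
            best = min(costs[p][s] for p in costs)
            regrets.append(costs[policy][s] - best)
        return max(regrets)

    # A robust policy is one whose worst-case regret is small.
    for policy in sorted(costs, key=max_regret):
        print(f"{policy}: worst-case regret = {max_regret(policy)}")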

M. GRANGER MORGAN

Head, Department of Engineering and Public Policy

Carnegie Mellon University

Pittsburgh, Pennsylvania


Climate change caution

I am critical of some of the conclusions of Richard B. Stewart and Jonathan B. Wiener in “Practical Climate Change Policy” (Issues, Winter 2004). The climate regime they propose is not as simple as it appears when viewed from an international perspective.

The European Union shares the view that the establishment of a trading regime is the right way to proceed in order to deal with the climate change issue, but fair rules for this approach are most important. The way reduction targets in the Kyoto Protocol were assigned to Annex-1 countries was in a sense ad hoc, but they were agreed to after cumbersome negotiations with the overall aim of achieving a 5 percent reduction of greenhouse gas emissions by developed countries between 1990 and 2010. The absence of commitments from developing countries was agreed on because developed countries, with 20 percent of the world population, are emitting twice as much carbon dioxide as developing countries.

The cap-and-trade scheme that is now proposed will face similar difficulties. How can a cap on total emissions be agreed on if reductions are to be achieved within a few decades? What path should then be chosen to reach a lower level of total emissions, and how should emission permits initially be distributed among participating countries? Developing countries would be very anxious to secure the possibility of increasing their emissions substantially for some time to come, and as indicated above, their arguments are strong. Developed countries still emit about 58 percent of total emissions, and U.S. per capita emissions are, for example, about 8 and 15 times larger than those of China and India, respectively. Developing countries would also be hesitant to adopt a trading system without some guarantees, because rich industrial countries could afford to buy permits at a price that developing countries would consider high. We would therefore still be confronted with cumbersome international negotiations if we aim for substantial reductions of global emissions in the near future. Cooperation between the United States and the European Union is therefore fundamental.

The European Union has been reluctant to accept credits for terrestrial sinks for carbon dioxide, the prime reasons being that it is not possible to assess the magnitude of such sinks with adequate accuracy because of their great heterogeneity, and that terrestrial storage may be stable only temporarily, particularly since the climate is going to change. It would be difficult to set up a reliable reporting system. Also, because it is not possible to distinguish between sinks created by human efforts and those that might occur anyhow because of enhanced photosynthesis in a more carbon dioxide-rich atmosphere, it would be next to impossible to verify whether the credits claimed for terrestrial sinks were the result of human efforts.

The issue of creating a regime for reducing the threats of climate change cannot be resolved by politics alone, but a level playing field for the prime actors is of course essential. Leaving this solely to market forces would, however, promote inequity and create further conflicts in the world. Rather, technological development is fundamental to making progress. Yet the development of renewable energy has been slow because cheap oil, gas, and coal still have a considerable competitive advantage.

The recent realization that it might be possible to store carbon dioxide in sedimentary rocks and aquifers then becomes interesting, provided that leakage can be avoided and the environment is not unduly disturbed. Use of this technology might be required in order to avoid an increase of greenhouse gases in the atmosphere beyond what would correspond to a doubling of the preindustrial carbon dioxide concentration, which might well be an unacceptable risk. The higher price for energy that will most likely result from our efforts to reduce greenhouse gases is then the price we will have to pay to secure one major aspect of sustainable development for humankind.

BERT BOLIN

Former chairman of the Intergovernmental Panel on Climate Change

Stockholm, Sweden


Can coal come clean?

“Clean Air and the Politics of Coal” by DeWitt John and Lee Paddock (Issues, Winter 2004) contains a good history of the federal government’s attempts to regulate the adverse effects of coal burning on air quality since the passage of the federal Clean Air Act in 1970, and of the national and regional politics involved in eroding the Environmental Protection Agency’s (EPA’s) New Source Review (NSR) Standard for coal-fired electrical plants. However, the article’s culmination in broad principles to guide future decisionmaking on bringing old coal-fired power plants into compliance with the Clean Air Act could be read as taking the edge off the very real need to solve what may be one of the biggest public health issues of the next decade.

During the past 30 years, federal, state, and local governments, environmental organizations, private industry, local citizens and others have worked diligently to devise regulatory and market mechanisms to improve the quality of the land and water in this country. These efforts have taken place under the federal Clean Water Act, Superfund, and the federal Resource Conservation and Recovery Act, as well as similar state laws requiring cleanup of polluted water, proper management of hazardous wastes, and cleanup of old industrial and municipal waste dumps. The public health threat posed by dirty water and hazardous wastes remains significant. But a large body of epidemiological evidence indicates that the most prevalent and serious environmental health impacts are likely from air pollutants. Few of us are directly exposed to soils at hazardous waste sites, but we all inhale pollutants in ambient air. And while we must stay vigilant in keeping harmful levels of chemical pollutants out of drinking water, the quantitative health risks from water are typically much lower than those from air, in part because we remove contaminants from municipal drinking water supplies, unlike pollutants in ambient air in our cities and industrial regions.

Our children and elderly parents experience significant health problems attributable to pollutants released into the air from coal-fired plants. The health impacts of these pollutants include premature death, hospitalization, and illness from asthma, chronic bronchitis, pneumonia, other respiratory diseases, and cardiovascular disease. John and Paddock cite EPA estimates that a 70 percent reduction of pollutant emissions from old power plants could prevent 14,000 premature deaths annually. Other reliable estimates of the cumulative public health impact are of a similar magnitude: A recent study by the Maryland-based consulting firm Abt Associates estimated that a 75 percent reduction of pollutant emissions from power plants would save 18,700 lives each year.

We need to act now to reduce this risk. The authors’ principle of strict enforcement of law is a commonsense policy prerequisite to addressing the serious health risks posed by these sources. The question remains, however: What rules regarding old plants will be strictly enforced? During the late 1990s, the EPA was on a course of vigorously pursuing enforcement actions against public utilities operating grandfathered power plants. The EPA had found that these utilities were violating the Clean Air Act under the pre-2003 NSR rules, and its actions resulted in several settlements that involved substantial cleanup at these facilities. But with the EPA’s new weakened NSR rules, there is substantial question about whether strict enforcement will lead to cleaner power plants at all. The principle of strict enforcement needs to be tied to principles of full disclosure of power plant modifications and rules for cleanup of old power plants that will truly protect public health.

The authors are correct in noting that federal lawmakers have before them a very complex decision. Significant investments in the utility and coal industries are at stake. But the stakes are higher for public health and the environment. And we need to address this problem now. First, we need to make sure that the public understands the consequences of coal burning for human health and the environment, and how these effects will markedly change the lives of their children. Second, the public needs to make it clear to Congress that it must act now to address the clean air/clean coal conundrum in a way that responsibly protects public health. We cannot wait another 25 years to reduce emissions from old coal-fired power plants. There is too much at stake.

MARTHA BRAND

Executive Director

Minnesota Center for Environmental Advocacy

St. Paul, Minnesota

Forests Face New Threat: Global Market Changes

For the past 100 years, U.S. forest policy has been guided by the assumption that the United States faced an ever-increasing scarcity of timber. Indeed, at times during the 20th century, there were fears of an impending timber famine. Policymakers responded accordingly, taking actions such as subsidizing reforestation, creating the national forests, and protecting forests from fires.

Now, however, the world has been turned upside down. The United States today finds itself in a world of timber surpluses and increasing competition. As a result, this country faces a declining role in the global wood products industry. In the Pacific Northwest, for example, questions about the competitiveness of the region’s prized Douglas fir products have shaken the industry to its core, a situation that was unthinkable a few years ago.

At the heart of the matter is the globalization of capital markets, which is dramatically altering the socioeconomic context for growing and manufacturing wood-based products from timberlands. One key change has been the adoption of agronomic approaches to wood production. Particularly important has been the expanded use of intensively cultivated, short-rotation tree plantations in temperate and subtropical regions of the Southern Hemisphere. These “fiber farms” have proved to be extraordinarily productive. Capital investments by many North American wood products corporations have shifted to that hemisphere in response to such productivity, as well as to other competitive advantages that exist there. At the same time, these companies have been selling huge tracts of land in the United States. In the future, imports will likely supply a significant share of U.S. consumption.

A stabilization or contraction of U.S. timber production might seem a boon to the environment. After all, timber-cutting practices have often had detrimental effects. In our view, however, the loss of timber production may be too much of a good thing, for two reasons. First, most of the forested land in the United States is privately owned; yet private forests provide large public benefits, including watershed protection and wildlife habitat. How will we maintain these forests when the owners can no longer make money from selling wood? Won’t they be increasingly tempted to sell their land to housing developers? Second, various human activities have radically altered the structure and functions of forests, to the point where it’s inconceivable that Mother Nature alone can restore them to their desired conditions. For the past century, proceeds from logging on public lands have been used to help in restoring natural resources. How in the future will we obtain the money needed to carry out essential stewardship of public forests, including restoration efforts? In light of the new economic and social circumstances, we believe that it is time for a major overhaul of our policies regarding private and public forests.

The impact of globalization

The shift of wood production out of North America should not come as a surprise. In a capitalist society, capital flows to those who can extract the most value from it, measured in the long run by achieving an acceptable rate of return on investment. In today’s globalized marketplace, the competition for capital occurs on a worldwide basis. With the dramatic reduction in impediments to capital flows, corporations have more freedom to seek out locations where factors of production provide the best return for stockholders and then move appropriate operations to those locations. In addition, we would expect this capital to flow where there appear to be the fewest impediments to its effective use–where mill siting is easy and corporate decisions are not haunted by the potential for further regulation.

Publicly held wood products corporations are subject to these same competitive pressures and opportunities. Long-term decisions by these corporations are driven by return on investment, measured primarily by discounted present net value. Corporate forestry has tended to follow the plantation model because it offers the prospect of a large return in a relatively short period. Today, with competition at the global level, U.S. wood products corporations are both forced and free to find the most productive forest environments, with the ultimate measure being the lowest per-unit cost of delivered product.
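
A back-of-the-envelope sketch in Python, using purely hypothetical per-acre figures rather than any actual industry data, shows why discounting pushes corporate forestry toward the short-rotation plantation model: at ordinary discount rates, a modest harvest a dozen years out is worth more today than a much larger harvest half a century away.

    # Discounted present net value of a single rotation, per acre.
    # All figures are hypothetical illustrations, not industry estimates.
    def npv_per_acre(harvest_revenue, rotation_years, establishment_cost, rate):
        """Discounted harvest revenue minus the up-front cost of
        establishing the stand."""
        return harvest_revenue / (1 + rate) ** rotation_years - establishment_cost

    rate = 0.07  # assumed real discount rate

    fiber_farm = npv_per_acre(harvest_revenue=3000, rotation_years=12,
                              establishment_cost=400, rate=rate)
    long_rotation = npv_per_acre(harvest_revenue=12000, rotation_years=50,
                                 establishment_cost=400, rate=rate)

    print(f"Short-rotation fiber farm NPV per acre: ${fiber_farm:,.0f}")
    print(f"Long-rotation stand NPV per acre:       ${long_rotation:,.0f}")
    # At a 7 percent discount rate, even a fourfold larger harvest 50 years
    # out is worth less today than a modest harvest 12 years out.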

This rationalization of the corporate wood products industry in North America accelerated with concentration of the industry during the 1980s. This period saw numerous hostile takeovers of corporations that had significant forest assets that could be liquidated. A depressed market for wood products in the early 1980s also contributed substantially to the process of industry consolidation. Plants were modernized and the work force was reduced.

Subsequently, trade agreements made the world the playing field and created new opportunities to take advantage of the extraordinary productivity of fiber farms in the Southern Hemisphere. These fiber farms typically are planted with non-native tree species, such as Monterey (radiata) pine or Douglas fir from North America, or several species of eucalyptus from Australia. Plantations generally are not located in the Third World or in tropical regions. Instead, they are found primarily in temperate countries with well-developed social structures and environmental awareness, such as New Zealand, Australia, and Chile, and in subtropical regions such as southeastern Brazil. Many millions of acres of fiber farms have been established, the majority of which are on abandoned agricultural and grazing lands. In general, they demonstrate high levels of wood fiber production on short rotations–productivity that can match or exceed the most productive of North American forestlands, including plantations of pines in the Southeast and of Douglas fir on the most productive sites in the Pacific Northwest. When biological productivity is factored in with lower labor and other social costs, few plantations in North America can compete with those in the Southern Hemisphere on per-unit cost of production.

As a consequence, publicly traded wood products corporations are moving out of less productive regions and making large investments in new fiber farms below the equator. In the United States, the divestment of corporate timberlands began in the Northeast and can now be seen in the Southeast and the Pacific Northwest. For example, during the past two years the Weyerhaeuser Company sold 250,000 acres of timberland in the Pacific Northwest, 174,000 acres in Tennessee, and 170,000 acres in the Carolinas. At the same time, the company has begun a $1 billion investment in Uruguay to create 321,000 acres of pine and eucalyptus plantations and the plants to process this wood.

Lagging policies

Clearly, the growth of fiber farms is fundamentally changing the economics of wood production in the United States. But U.S. forest policy has not kept pace. When the basic policies were developed in the early 1900s, they made good sense. The nation’s forests were being logged and burned with little thought of reforestation. Thus, the government put in motion policies to increase the wood supply, including subsidizing reforestation, establishing cooperative fire protection, and reserving a portion of the forests to be managed under sustained yield. Even with these public policies, the rising real price of wood seemed, until recently, to confirm the hypothesis of increasing scarcity. Economic projections and federal policies assumed that the real price of wood would continue to increase for the foreseeable future.

The nation now faces a very different wood products market. U.S. wood consumption is projected to significantly exceed production for the next 50 years. This circumstance might have been expected to stimulate increased stumpage values, but in fact softwood prices are projected to be fairly stable throughout most of the nation. This is the case even though the national forests have been effectively removed as a major supplier of softwood. U.S. softwood lumber production has been maintained with timber from private forests, especially in the south, and by improved milling efficiency, but there is little incentive for investment in U.S. forestland. The gap between consumption and harvest has been filled increasingly by cheap wood available from other countries. Much of this has been from Canada, and in some ways we have substituted Canadian old-growth wood for our own. For the future, we can expect that wood imports from Oceania will become increasingly important. Although some studies suggest that imports will provide about one-third of softwood supplies, we think they will provide significantly more.


Compounding the problem, the market premium that generally has been paid for large logs also has largely disappeared. Engineered wood products, which use small pieces of wood glued together, are replacing solid wood in structural uses, such as large beams. This disappearance of the price premium for large logs reduces the benefit from holding timber stands for long periods, thus shortening rotations further and penalizing trees such as Douglas fir that start relatively slowly but sustain growth for long periods.

Moreover, wood that will likely be available from public lands will be of low value, the product of forest restoration and fuel reduction projects, in which younger, small trees are removed from forests in order to help prevent fires. The remaining old growth, which might bring higher prices, is largely reserved.

These conditions, in aggregate, suggest a very different world for timber production, one in which the prize goes to the low-cost producer. In such a world market, the United States is likely to be a marginal producer, with relatively high costs, that can compete most fully in times of high demand. Although some of our forest production will continue to be internationally competitive, much of it will not. And the competition will be ruthless in the free trade environment that is probable in the future. This situation is analogous to that in agriculture, in which U.S. producers are having trouble competing with imported crops on production cost. Of course, uncertainties about future energy costs make absolute conclusions difficult. Sharply rising fossil fuel costs could significantly increase the costs of wood products produced 5,000 to 9,000 miles away from U.S. markets.

In addition, timber production in the United States may become more trouble than it is worth. Many of the multinational corporations are expanding their vertical integration deep into retail markets, and the continuing controversies over sustainable forest management can significantly affect retail sales. Maintaining a marginal, unstable, and controversial component of their operation may be increasingly questioned at corporate headquarters.

Thus, the movement of forestry investment out of North America is likely to continue, as major wood products corporations are divesting themselves of land and processing capacity and investing in the Southern Hemisphere. The United States will likely become a minor player in the global production of common wood-based products, including lumber, pulp, and paper. It is possible, of course, that industrial firms with more of a regional or local focus will emerge or be revitalized and purchase some of the lands and mills of the multinationals, reversing current trends toward consolidation. However, much of the industrial forestland around metropolitan centers is likely to be bought by developers to the degree that land use laws will permit conversion to such uses. Nonindustrial private forest landowners also will likely shift more of their forests to other uses.

New challenges

Does this shift really matter? In fact, might not this be a good thing, because the nation now can preserve its forests and consume cheaper wood products produced in fiber farms in other countries? Indeed, many participants in the national and global timber-environment debates believe that this division of the global forestry estate into fiber farms and unmanaged natural forests provides the solution to many forestry conflicts. Reducing management intensification on forest industry lands in the United States could certainly have a positive effect on biodiversity, especially in forests at an early stage of growth. A reduction in timber harvest rates also could lessen a variety of environmental effects. Finally, the reduction in timber value makes it less costly to leave trees standing in the woods to provide for wildlife habitat and other purposes.

Unfortunately, the potential loss of the wood products industry accentuates several major challenges. One challenge is retaining private forests in forest cover. Private forests provide large public benefits in the form of various goods and services, including watershed protection, wildlife habitat, and open space. Many private forest owners allow public access to their lands. Thus, it is in society’s interest to retain the majority of these lands in forest cover. For many societal objectives, even periodic clear-cutting is preferable to conversion to subdivisions or other nonforest uses.

In the United States, approximately 350 million acres, or 70 percent of all timberlands, are privately owned. Roughly 290 million of those acres are in small and medium-sized tracts belonging to 9.9 million nonindustrial private forest owners. Many of these owners maintain their forests with the proceeds derived from selling trees to wood products corporations. But with a decline of major markets for wood products, what incentives will these private owners have to retain or manage their lands as forest? Where timberlands are located in expanding metropolitan regions or in remote locations suitable for second homes, subdivision is a profitable possibility.

Indeed, major losses of forests to other uses have occurred already. Between 1982 and 1997, 14 states lost more than 2 percent of their forests. Losing the highest percentages were Nevada (16.7 percent), Massachusetts (10.6 percent), New Jersey (10.6 percent), and Colorado (8.4 percent). North Carolina lost the most in total forest area, more than 1 million acres. The nation can expect to lose as much as 25 million more acres in the next 50 years.

These issues are comparable to concerns about the loss of private rangelands in the West and their conversion to subdivisions. Many ranchers have small holdings, often centered on water sources and surrounded by public lands on which they have grazing permits. As the economic viability of these operations declines, the ranchers face the prospect of selling their land to developers. Yet in the Southwest and along the Front Range of Colorado, it is increasingly clear that ranching is preferable environmentally to subdivisions.

A second challenge caused by a shrinking wood products industry comes in maintaining stewardship of forests on public lands. In the past, revenues from the harvest of old growth forest, perhaps ironically, provided the money and the political support for management and restoration activities on federal lands. Without that revenue, national forest budgets have plummeted, with the exception of funds for firefighting. Recent concerns about wildfires may provide some help, but the focus remains primarily on protecting people and property rather than on restoring and managing forests.

All forests require periodic stewardship, including regular monitoring of their condition and protection from undesirable influences. Many people appear to believe that as the threat of timber harvest disappears, public forests can be left to take care of themselves. But this would be a mistake. Too many of the natural fundamentals have changed in public forests during the past century to leave management to nature. Climatic and other environmental variables such as air quality have been modified and are continuing to undergo significant change. Millions of acres of simplified forests and streams exist that need to be restored to fuller ecological function; for example, as spawning habitat for fish. In addition, virulent exotic pests and pathogens introduced during the past century have had devastating effects on many tree species and forests. The pace of these introductions has accelerated with increased global commerce.

Unsustainably high accumulations of fire fuels on many Western lands exemplify the need for active management to restore and maintain functional forests. These adverse conditions were created during the past century by activities such as fire suppression, grazing, logging, and the creation of dense stands of trees by planting. The effects of such activities on fuel accumulations and fire behavior have been greatest on the millions of acres of pine and mixed-conifer forests that evolved under a regime of regularly occurring wildfires of low or moderate intensity. Major programs are now needed to restore fuel loads to characteristic levels, and maintaining them at appropriate levels will require active management in perpetuity, using tools such as prescribed burning and periodic fuel removal.

Plans for response

A rational societal response to these challenges would be to adopt policies that will help maintain forest cover in private forests and restore desired conditions and sustain essential stewardship in public forests. (One bit of good news in this regard is that the economic expectations of many private forest owners often differ from those of publicly held corporations, making them more flexible about acceptable rates of return and definitions of capital.) Society has a variety of tools at hand to help accomplish these goals. These tools fall into six major categories, based on their purpose:

Reducing the costs of managing private forests. During the late 19th and early 20th centuries, abandonment of land by private forest owners was a serious problem. Property taxes were imposed annually on forestland, much like on residential property. But once the timber was cut, the owners had no revenues for many decades to pay the taxes. Consequently, owners abandoned their land in order to escape the property tax. Most states recognized this problem and adopted a yield tax, paid at harvest, as the primary property tax on forests. In addition, a variety of federal income tax provisions, such as lower tax rates for timber sales, recognized society’s interest in maintaining productive forestlands. Such policies arose from the desire for increased supplies of wood.

The nation now needs to revisit these policies, but with a new goal: the maintenance of forests across the landscape. Among other actions, this will require reconsidering federal income and estate taxes. Allowing income averaging in income tax calculations and allowing capitalization of more of the expenses associated with timber production could have major impacts on production costs. Continuing to ease estate taxes that allow families to pass forests intact from generation to generation could slow the conversion to other uses.

Changes also are needed in state regulations that currently are focused on increasing future timber supplies. Most states with extensive forestland, such as Oregon and Washington, have significant reforestation requirements after harvest to ensure a new commercial crop of trees. But these regulations impose a financial burden on landowners, and in some cases they actually reduce biodiversity. These requirements could be adjusted to lower the cost of production and produce greater ecological benefits.

Finally, the regulatory environment should be stabilized. For many forest owners, expanded environmental regulations, based on the strictures of the Clean Air Act, the Endangered Species Act, and other federal and state laws, often seem to lurk around every tree. Although many of these regulations are needed, achieving a stable policy environment for investment must be a long-term goal. For example, landowners need assurance that improving habitat conditions in their forests and streams, which may attract endangered species, will not result in increased regulatory constraint on management alternatives.

Even adjustments in federal land management policies can have a beneficial (or negative) effect on the regulatory environment. For example, adoption of the conservation-oriented Northwest Forest Plan for federal lands in the Pacific Northwest had the unplanned effect of creating a more stable regulatory environment for managers of private and state trust lands. This plan resulted in the federal government taking the major responsibility for forest species conservation in the Pacific Northwest by making conservation the primary objective on nearly 80 percent of the federal forestland base. The consequence was that the conservation burden was dramatically reduced on private and state trust forestlands, as a result of mutually agreed-on Habitat Conservation Plans for millions of acres of those lands.

Creating markets for important forest goods and services. Forestlands provide numerous societal services and goods that have not been fully valued, in part because markets are lacking. As potential returns from wood production decline, economic recognition of other forest values, including the creation of markets, could provide incentives for forest stewardship. Two of these alternative values are watershed protection and carbon sequestration.

Watershed protection is arguably the most important service and water the most important good provided by forest ecosystems. Society largely takes the availability of water for granted. Yet the maintenance of a well-regulated high-quality supply of water is and will remain the most important function of forests in the 21st century. Active restoration and management programs will be needed to protect or restore streams and rivers that pass through forests. This effort will require activities such as reducing the impacts of existing road systems and restoring structural complexity to simplified water channels. The problem is that mechanisms generally do not exist that fully recognize the value of water as a good and that compensate forestland owners for their stewardship of this resource. Creating an appropriate market for watershed protection would seem to be a critical step in that stewardship.


Although the issue of water rights is certainly a legal labyrinth in North America, new approaches to water valuation will be critical in moving toward recognition of watershed values in forests and appropriate stewardship. Possible approaches range from treating water as a fully tradable commodity to using market incentives to increase the efficiency of water use and allocation. Although water markets are not new, the concept has not been widely applied in the United States.

There are complexities in applying market principles, however, as indicated by the experiences of other countries. For example, in 1981 Chile adopted a water law based on a free-market approach that substantially reduced the regulatory role of government. Although the law has had positive impacts on investment and flexibility in water allocation, major difficulties were encountered in dealing with issues such as social equity, environmental concerns, and integrated watershed planning–the types of issues typically addressed by governmental institutions.

Forests also can play a substantial role in combating global warming. The major greenhouse gas, carbon dioxide (CO2), has increased in the atmosphere largely because of the burning of fossil fuels. As forests grow, they take up large amounts of CO2 from the atmosphere and sequester it in their wood, often for centuries. Many forests, such as those in the Pacific Northwest, have a very large capacity to store additional CO2.

Carbon markets could provide incentives for forest managers to manage lands in ways that would either remove (sequester) additional CO2 or prevent its release into the atmosphere. Markets could stimulate such practices as creating forests on marginal agricultural lands; lengthening the rotation period (the time between harvests); altering harvesting techniques to leave additional carbon after harvest; and permanently reserving existing forests from harvest, a particularly effective approach in the short term. Until public policy requires control of carbon emissions, however, carbon sequestration in forests will have little market value despite the potential for forests to make a significant contribution to the control of greenhouse gases.
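
As a rough illustration of the incentive at stake, the Python sketch below multiplies acreage, additional sequestration, and a carbon price to estimate annual revenue; the figures are assumptions chosen only to show the arithmetic, not estimates for any real forest or market.

    # Back-of-the-envelope value of additional sequestration to a forest
    # owner who lengthens rotations. All numbers are hypothetical.
    acres = 1000
    extra_tonnes_co2_per_acre_per_year = 2.0   # assumed additional uptake
    carbon_price_per_tonne = 10.0              # assumed price, $/tonne CO2

    annual_revenue = (acres * extra_tonnes_co2_per_acre_per_year
                      * carbon_price_per_tonne)
    print(f"Annual sequestration revenue: ${annual_revenue:,.0f}")

    # Without a policy that caps or prices carbon emissions, the effective
    # price is zero, and so is the incentive described above.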

Wildlife is another forest good that can be monetized. Private forest owners can market game species by selling hunting rights. This approach is already widespread in the Southeast, where there has been a tradition of leasing private forests to hunting clubs. Compensating forest owners for providing wildlife habitat could be extended to nongame species and other biota as well by providing public incentives, including direct payments.

Purchasing land and conservation easements using public funds. The most direct approach to maintaining public values on private forest landscapes is by purchasing the land. Such an approach led to the creation of the national forests in the eastern United States in the 1920s and 1930s as a way to stem abuse of private lands in the headwaters of navigable streams. However, there are significant issues surrounding the purchase of private forests. Acquisition costs can be high because of the amount of land involved and the high value of competing uses. Opposition to public acquisition of private lands also can be formidable, as many people resist such a practice out of principle or fear of a negative impact on the property tax base.

Conservation easements are an alternative approach to ensuring that critical forest values are maintained, presumably in perpetuity. But such easements can carry substantial price tags, sometimes as much as 90 percent of the cost of outright purchase. In addition, conservation easements are currently undergoing substantial public scrutiny as to their actual value to society and their long-term viability, particularly with regard to adequate trust oversight, issues that were brought into focus by recent investigations into practices used by the Nature Conservancy.

Acquisition and management of private forests by nonprofit organizations is a third approach. For example, in 2002 the nonprofit Evergreen Foundation proposed to acquire approximately 100,000 acres of forests near Seattle from Weyerhaeuser by issuing tax-free bonds; a portion of the acquired forest would subsequently be managed to pay the interest and, ultimately, the principal on the bonds. The foundation failed to obtain approval for issuing the bonds before the company’s purchase deadline, but the potential value of this approach is apparent.

These approaches and others might be combined in regional efforts to maintain functional forest landscapes where large divestitures of corporate forests are occurring. Approaches adopted in the Northeast, which first experienced this phenomenon, provide useful guides. For example, a project to conserve public forest values in northern New Hampshire and adjacent Maine combined purchases of lands and conservation easements by nonprofit organizations and governments with agreements by private landowners to practice ecological forestry.

Using zoning regulations to control land use. Society traditionally has used a variety of mechanisms to influence land use practices on private lands, with zoning, which allows certain uses and prohibits others, being high on the list. For example, Oregon passed land use laws in the 1970s that had the preservation of prime farm and forestland as a primary goal. These laws have been overwhelmingly successful in slowing the development of forest and farmland and in concentrating commercial and residential development in designated areas of the state. Over time, the state has allowed urban growth boundaries and rural residential areas to systematically expand to accommodate population increase. This approach does have its detractors, though, as the zoning often significantly limits the potential economic value of the property. To help adjust for that, land zoned for farms or forests is taxed at a lower rate. Still, the debate over the fairness of this approach continues.

Creating or maintaining a viable domestic forest industry. It will, of course, take time to work through the various public policy options, and therefore it will be important to maintain an industrial wood products infrastructure and skilled workforce. As is already evident in the Intermountain West, it is difficult to regenerate a capacity once it is gone. For example, there is no longer significant plant capacity to use small, low-quality wood from fuel reduction treatments for either wood products or biomass energy production in Arizona and New Mexico, where these treatments are badly needed, because plants have been closed or converted to other raw materials. Private capital is unlikely to recreate this capacity, given current market economics and uncertainties regarding a dependable wood supply.

One key to maintaining an indigenous forest industry is finding niches in the global marketplace. Focusing on special high-quality wood products is one approach, perhaps with brand differentiation. For example, mature wood from Douglas firs has special strength properties, and this trait might be exploited to create a variety of specialty products, which would then be promoted accordingly. There is, of course, the risk that niche markets will disappear, as many such markets have, with the advent of engineered wood products. More generally, attention might focus on the production of woods of species and qualities that cannot readily be grown on fiber farms. High-quality hardwood timber for use in quality furniture and cabinetry is an important example.

Stewardship of forests throughout much of the United States, including fire restoration programs in the West, will require the harvest of smaller trees in order to better protect large old trees. However, because of the federal government’s spotty track record in providing a steady supply of such trees, entrepreneurs will be reluctant to invest in state-of-the-art milling and biomass processing facilities geared to their use. Several steps can be taken to encourage necessary private investments in modern plants. For example, the government can absorb a significant amount of the risk of establishing new plants or revitalizing old ones by making technology grants and interest-free loans, and it can commit to providing a stable wood supply. Promises of a steady long-term supply of timber were routinely used to lure industry to the West during the 20th century. Both approaches have difficulties that must be overcome. Many people see the forest industry as part of the problem, not as part of the solution. Thus, the idea that the nation needs the industry to address environmental problems is hard for them to accept. The idea that society actually should subsidize this industry to rebuild may prove even more bothersome. Still, both approaches are critical to forest stewardship, particularly in the interior West.

Increasing local community involvement in the stewardship of public lands. In many respects, delegation of authority is only logical. Who better to manage lands than the people who are most directly affected by their condition?

There are some excellent examples of what can be achieved by local stewardship. Municipal watersheds, which provide domestic water supplies, may be particularly illustrative of the potential of this approach. The city of Seattle has adopted a plan for its Cedar River watershed that emphasizes ecological restoration of both the forests (many of which were clear-cut in previous decades) and the stream systems and fisheries. Many other municipalities, from small towns to large cities, depend on watersheds that are partially or wholly owned by state and federal governments; allowing these municipalities to accept stewardship responsibilities for those lands seems appropriate. This may also include significant obligations to pay some or all of the stewardship costs, given the local benefits.

Restoring and maintaining appropriate amounts of potential fire fuels in publicly owned forests is another example where at least partial delegation to local governments and organizations may be appropriate. Recent congressional legislation provides a start by giving communities more of a role in stewardship efforts. However, these efforts are typically limited to specific types of activities, such as the focus on fuel treatments in the case of the Healthy Forest Act. It should be noted, however, that some environmental organizations have worked hard in years past to change the power centers of forest policy from the local to the national level, and hence they may be likely to oppose assigning such authority to local bodies. Thus, much proof of concept will be necessary before the approach is broadly accepted. The fact that the nation is entering an era in which major economic incentives for timber harvest no longer exist may make the transition easier.

Beyond boundaries

How useful will these policies be as the United States shifts from timber scarcity to timber abundance? We do not claim to have all of the answers or to see the future with perfect clarity. We do believe, however, that these suggested actions will be steps in the right direction, and we hope that they will pave the way to further discussion and action. The new world now emerging will certainly take some getting used to.

Nor is North America alone in facing the fundamental changes occurring in the economics of forest management. Much of Western Europe faces similar dilemmas. France and Germany are puzzling over how to maintain their traditional decentralized forest management activities as timber revenues decline. These changes not only affect forest conservation but undermine rural economies and communities that have existed for centuries.

Thus, a new day has dawned for forest management in the United States and worldwide, driven by the realities of the global marketplace. The implications will dominate forest management and forest policy for much of this century, and the vitality of forests will hinge on what actions are taken and how soon reforms begin.

From the Hill – Summer 2004

Most R&D agencies prepare for budget cuts in fiscal year 2006

The Bush administration’s plan to cut the federal deficit in half over the next five years would mean funding cuts for most R&D agencies, with the steepest cuts coming in fiscal year (FY) 2006, after this year’s elections. Only defense, homeland security, and space would be spared.

A May 19 Office of Management and Budget (OMB) memo, which was leaked to the media in early June, told federal agencies to begin planning for budget cuts in most domestic programs. Although the FY 2005 budget process is proceeding at a snail’s pace in Congress, federal agencies have already started formulating their FY 2006 budget requests.

For FY 2006, the OMB memo means that all R&D funding agencies except the Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), the Department of Energy (DOE), and the Department of Homeland Security (DHS) must plan for cuts to their R&D portfolios. The National Science Foundation (NSF) and the National Institutes of Health (NIH), which would receive small increases in the administration’s proposed FY 2005 budget, would see their gains reversed in FY 2006.

Below are some budget projections for FY 2006 made by the American Association for the Advancement of Science that are based on the OMB directive.

NIH will have to plan for a 2 percent or $600 million cut in FY 2006 after a 2.6 percent increase in FY 2005, leaving the agency with a total budget of $28.2 billion, barely above this year’s $28 billion budget. After factoring in expected inflation, NIH’s FY 2006 budget would be 2 percent below this year’s funding level.

NSF would see its proposed gains in FY 2005 reversed with a 2 percent or $85 million cut for its R&D programs in FY 2006, leaving NSF R&D below this year’s funding level after adjusting for inflation.

Although DOE would see a gain in its R&D budget in FY 2006 because of projected increases for its defense and energy R&D portfolios, the budget for its Office of Science would fall 2.4 percent or $81 million in FY 2006 following a proposed cut in FY 2005. This amounts to a 5.4 percent cut in two years after adjusting for inflation.

Other R&D funding agencies would see further cuts in FY 2006 after proposed cuts in FY 2005. Total cuts over two years after inflation are projected as follows: Department of Agriculture (8.3 percent), the Environmental Protection Agency (11.6 percent), the Department of Commerce’s National Oceanic and Atmospheric Administration (6.2 percent), Commerce’s National Institute of Standards and Technology (13.8 percent), and the Department of the Interior (8.4 percent).

Although R&D budgets in DOD, DHS, and NASA would increase every year during the next five years, some of their programs would not be as fortunate. DOD would cut its support of “S&T” (basic and applied research plus technology development) steeply in FY 2005 and by another percentage point in FY 2006, leaving the DOD S&T portfolio 18 percent smaller after inflation than in FY 2004. And although NASA R&D would increase overall in FY 2006 to ramp up its moon and Mars activities, funding for biological and physical research and earth science would fall steeply in FY 2006.
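
For readers who want to see how such inflation-adjusted comparisons are made, the short Python sketch below converts a series of nominal annual budget changes into a real change relative to the base year; the 1.5 percent inflation rate is an assumption for illustration, not the deflator AAAS actually used.

    # Convert nominal budget changes into a real (inflation-adjusted) change.
    def real_change(base_budget, nominal_changes, inflation_rate):
        """Cumulative change in purchasing power after a series of annual
        nominal changes, relative to the base year."""
        nominal = base_budget
        for change in nominal_changes:
            nominal *= 1 + change
        real = nominal / (1 + inflation_rate) ** len(nominal_changes)
        return (real - base_budget) / base_budget

    # NIH: roughly $28 billion in FY 2004, +2.6% in FY 2005, -2% in FY 2006.
    change = real_change(28.0, [0.026, -0.02], inflation_rate=0.015)
    print(f"NIH FY 2006 budget vs. FY 2004, after inflation: {change:+.1%}")
    # With this assumed deflator the result lands near the roughly 2 percent
    # real decline cited above; a higher inflation rate would deepen the cut.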

Congress probes outside earnings of NIH employees

A congressional committee has begun investigating possible problems stemming from the involvement of employees of the National Institutes of Health (NIH) with outside organizations. The congressional interest was prompted by negative stories in the media, including a December 2003 Los Angeles Times article, which reported that high-level NIH scientists had received large payments for their activities with biotechnology firms and drug companies.

Under current policies, NIH employees are able to accept compensation from awards or private-sector consulting while retaining federal employment. But there is concern that financial ties between federal scientists and outside pharmaceutical or biotech firms may introduce bias into the research process.

In response to the initial outcry over the agency’s ethics rules, NIH Director Elias Zerhouni formed a blue-ribbon panel to analyze conflict-of-interest policies and offer recommendations. The panel’s report, released in early May, recommends improving the transparency of outside activities and banning the use of stock options for compensation. It also would completely prohibit some high-level employees from consulting activities.

In May, the House Energy and Commerce Subcommittee on Oversight and Investigations held two hearings on the issue. At a May 12 hearing, several members indicated that they were dissatisfied with what they considered the leniency of the panel’s recommendations and concerned about possible loopholes in the proposed new guidelines.

At both hearings, former and current NIH administrators defended the practice of outside consulting. They argued that compensation from and interaction with the private sector are crucial to recruiting and retaining top scientific talent. Norman Augustine, a member of the blue-ribbon panel, said that without outside income, NIH scientists would be underpaid compared to their private-sector colleagues and more likely to leave federal employment. But this argument didn’t satisfy some members of the committee, including Rep. Diana DeGette (D-Col.), who said that she, like other members of Congress, earns much less than private-sector colleagues with the same experience.

Zerhouni and others asserted that public-private collaborations are essential to the advancement of public health and the translation of knowledge into medical practice. Although members of the committee acknowledged the necessity of some level of private-sector interaction, they cited Cooperative Research and Development Agreements, the formal partnering mechanism between the federal government and industry or academia, as a method for external collaborations.

At a second hearing, the committee examined a specific conflict-of-interest case involving two scientists, one from the National Cancer Institute (NCI) of the NIH and the other from the Food and Drug Administration (FDA). The NCI and the FDA had worked in formal collaboration with a private bioscience firm to develop and commercialize an ovarian cancer screening tool. The two federal scientists, who acted as co-principal investigators for the public-private partnership, were recruited for outside consulting purposes by a competitor of the private partner. The scientists’ requests to consult with the competing firm, which also developed technology to detect biomarker patterns, were approved by the FDA and NCI. After learning that the scientists were involved with its competitor, the private partner brought the issue to the NCI’s attention.

Arguing that they were unaware that the firms would be regarded as competitors, the NCI and FDA scientists told the subcommittee that they had recently ceased their relationship with the competing firm. Members of the subcommittee felt that even the appearance of ethical improprieties in this case, and in other cases discussed at the hearing, merits changes to NIH policies. Rep. James Greenwood (R-Penn.) said that when comparable situations occur, damage is done both to the partnership and to public trust in the agency.

Although acknowledging the good intentions of Zerhouni and the panel, subcommittee members at the conclusion of the hearings remained concerned about the conflict-of-interest regulations. Rep. DeGette suggested that the situation may warrant a “ban on outside compensation,” although that outcome appears to be remote at this time. Subcommittee members said they will continue to press for even greater transparency in private-sector activities and to require financial disclosure from a larger number of employees than the blue-ribbon panel advocated.

Subsequent to the NIH hearings, the FDA announced plans for a comprehensive assessment of its policies regarding employee involvement with outside activities. Although the FDA has more stringent regulations concerning its employees’ interactions with firms in the industries it regulates, the NIH case indicates that the FDA may be in need of a policy review as well.

White House, Congress promote efforts to boost supercomputing

A White House task force released a report in early May concluding that current federal efforts in supercomputing are inadequate. Meanwhile, several bills have been introduced in Congress to bolster U.S. R&D on supercomputing.

In its Federal Plan for High End Computing, the White House High End Computing Revitalization Task Force reports that scientists lack access to the latest, cutting-edge, high-performance computers (HPCs). It asserts that the private sector’s current focus on developing computers for personal and business applications has led to a dearth of investment in scientific computing resources. This has resulted in the creation of a high-cost, low-volume market that provides economic disincentives for industry to devote scarce resources to develop HPCs. Thus, HPCs today, especially those made from commercial off-the-shelf components, lack sufficient computing ability to solve complex scientific challenges, ranging from weapons simulation and satellite data processing to aircraft engineering and atmospheric modeling.

In Congress, Rep. Judy Biggert (R-Ill.) has introduced two bills: the High Performance Computing Revitalization Act of 2004 (H.R. 4218) to coordinate agencies and leverage HPC investments, and the Department of Energy (DOE) High-End Computing Revitalization Act of 2004 (H.R. 4516) to support a project at the Oak Ridge National Laboratory.

The House Committee on Science, which shares the concerns expressed by the White House task force, held a hearing on May 13 to address H.R. 4218. Rick Stevens, director of the National Science Foundation’s Teragrid Project and the Mathematics and Computer Science Division at Argonne National Laboratory–located in Biggert’s district–warned that without additional federal action, the United States will be unable to maintain its leadership status in the future. He stressed that many of the sciences, including materials science, genomics, astrophysics, climate modeling, high-energy physics, and cosmology, rely on access to supercomputers for progress and advancement. Stevens added that U.S. dominance in scientific research is directly proportional to supercomputer investment and availability.

H.R. 4218 would amend the High-Performance Computing Act of 1991 to create an HPC R&D program under the leadership of NSF and DOE. It is aimed at advancing the capacity and capability of HPCs and networks for use by researchers through R&D targeted to systems, networks, and network applications relevant to the public and private sectors. It establishes security standards and practices for systems and mandates an increase in the number of graduates and undergraduates studying software engineering, computer science, computer and network security, applied math, library science, and information and computational sciences.

Although H.R. 4218 has been endorsed by the Bush administration’s top science advisor, Jack Marburger, director of the Office of Science and Technology Policy, Stevens expressed concern that an overemphasis on research instead of deployment would allow firms to comply without actually providing new hardware. Stevens nonetheless voiced support for the overall goals of the bill.

H.R. 4516, a companion to S. 2176 introduced by Senators Jeff Bingaman (D-N.M.) and Lamar Alexander (R-Tenn.), calls for R&D in computing architectures and software development; sustained access to HPCs by the research community; technology transfer to the private sector; and coordination with other federal agencies. The bill allocates $50 million in FY 2005 and increases funding to $60 million in FY 2007.

Finally, the bill would give $25 million to the Oak Ridge National Laboratory toward building a supercomputer capable of 50 trillion floating-point operations per second (teraflops), at an estimated total cost of between $150 million and $200 million. It would be faster than the Earth Simulator, a supercomputer developed by Japan in 2002. The development of the Japanese computer has served as a catalyst for efforts to renew U.S. leadership in supercomputing.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

A Patent System for the 21st Century

The breakneck pace of innovation across many industries, the explosive developments in particular areas such as biotechnology and software, and the rapidly changing role of universities in the development and ownership of technology create challenges for the U.S. patent system. Fortunately, one of the system’s strengths is its ability to adapt to the evolution of technology, and that strength has been particularly apparent in the past two decades. But to ensure that it continues to operate effectively, we need to evaluate the patent system from a broad perspective and continue to update it to meet changing conditions.

Since 1980, a series of judicial, legislative, and administrative actions has extended patenting to new technologies (biotechnology) and to technologies previously unprotected or covered by other forms of intellectual property (software and business methods), encouraged the growth of new players (universities), strengthened the position of patent holders vis-à-vis infringers both domestically and internationally, relaxed other restraints on the use of patents (antitrust enforcement), and extended patenting’s reach upstream from commercial products to scientific research tools and materials.

As a result, patents are being ever-more-zealously acquired, vigorously asserted, and aggressively enforced. There are many indications that firms in a variety of industries, as well as universities and public institutions, are attaching greater importance to patents and are willing to pay higher costs to acquire, exercise, and defend them. Meanwhile, the costs of acquiring patents, promoting or securing licenses to patented technology, and defending against infringement allegations in court are rising rapidly.

In spite of these diverse developments and the obvious importance of patents to the economy, there has been little broad-based study of the effectiveness of the patent system. But now the National Research Council (NRC) has assembled a committee that includes three corporate R&D managers; a university administrator; an inventor; and experts in biotechnology, bioengineering, chemicals, telecommunications, microelectronics, and software; as well as economists; legal scholars; and practicing attorneys. The committee’s report, A Patent System for the 21st Century, provides a unique and timely perspective on how well the patent system is adapting to evolving conditions and focuses particularly on how patenting practices affect researchers and universities.

Until now, most study of the patent system has been conducted by practitioners. This has helped keep the system running smoothly and adjusting to new developments, but practitioners cannot be expected to assess the overall effect of the system on the economy or its specific effect on activities such as university research and private-sector R&D spending. This study takes a close look at these questions, makes recommendations in a few key areas, and highlights dimensions where more data is needed to inform policy decisions.

The high rates of innovation that have continued since the patent system’s recent changes, especially in the 1990s, are evidence that the system is working well and does not require fundamental change. Nevertheless, there is little evidence that the benefits of more and stronger patents extend very far beyond a few manufacturing industries such as pharmaceuticals, chemicals, and medical devices. It is not clear that patents induce additional R&D investment in the service industries and service functions of the manufacturing economy, although their roles in that diverse sector have not been studied systematically. One obvious conclusion of the committee’s study is that we need a much more detailed understanding of how the patent system affects innovation in various economic and technological sectors. But even without additional study, the committee was able to identify areas within the current patent system where there are strains, inconsistencies, and inefficiencies that need to be addressed now.

Signs of stress

The NRC committee’s first task was to identify areas where the patent system is under stress and where it might be in need of reform. In reviewing recent developments in the patent system, the committee discovered a number of areas that deserve special attention because changes in U.S. Patent and Trademark Office (USPTO) policy, the practices of institutions, or the technologies themselves have changed the ground rules. Biotechnology deserves particular scrutiny, because this remains a new domain for the USPTO, but all technologies need attention. In some cases, the committee was not able to find enough information to draw firm conclusions or make specific recommendations, but it discusses these areas to help set the stage for possible policy action in the future. The aspects of patenting that the committee addresses include:

Maintaining consistent patent quality is difficult in fast-moving fields. Over the past decade, the quality of issued patents has come under frequent sharp attack, as it sometimes has in the past. One can always find patents that appear dubious at best: a patent on a computer algorithm for searching a mathematics textbook table to determine the sine or cosine of an angle, a patent for cutting and styling hair using scissors or combs in both hands, a patent on storing music on a server and letting users access it by clicking on a list of the music available. To be fair, some errors are unavoidable in a system that issues more than 160,000 patents annually. Still, some critics have suggested that the standards of patentability–especially the nonobviousness standard–have become too lax as a result of court decisions. Other observers fault the USPTO’s performance in examining patent applications, variously attributing the alleged deterioration to inadequate time for examiners to do their work, lack of access to prior art information, or the diminishing qualifications of the examiners themselves.

Because the claim that quality has deteriorated in a broad and systematic way has not been empirically tested, conclusions must remain tentative. There are nevertheless several reasons to suspect that more issued patents are substandard, particularly in technologies newly subject to patenting. One reason to believe that quality has suffered, even before taking examiner qualifications and experience into account, is that in recent years the number of patent examiners has not kept pace with the increase in workload represented by the escalating number and growing complexity of applications. Second, patent approval rates are higher than in some other major nations’ patent offices. Third, changes in the treatment of genomic and business method applications, introduced as a result of criticisms of the quality of patents being issued, reduced, or at least slowed the growth of, patent grants in those fields. And fourth, there appears to have been some dilution of the application of the nonobviousness standard in biotechnology and some limitations on its proper application to business methods patent applications. Although quality appears to be more problematic in rapidly moving areas of technology newly subject to patenting and is perhaps corrected over time, the cost of waiting for an evolutionary process to run its course may be too high when new technologies attract the level of investment exhibited by the Internet and biotechnology.

What are the costs of uncertainty surrounding patent validity in areas of emerging technology? First, uncertainty may induce a considerable volume of costly litigation. Second, in the absence of litigation, the holders of dubious patents may be unjustly enriched, and the entry of competitive products and services that would enhance consumer welfare may be deterred. Third, uncertainty about what is patentable in an emerging technology may discourage investment in innovation and product development until the courts clarify the law, or inventors may choose to incur the cost of product development only to abandon the market years later when their technology is deemed to infringe. In sum, one suspects that a timelier and more efficient method of establishing ground rules for patent validity could benefit innovators, followers, and consumers alike.

The patent system needs to continue to accommodate new technologies. The incorporation of emerging technologies is not always seamless and rapid; indeed, it often generates considerable controversy. Moreover, case law recognizes limits to patenting, confining patents to inventions that can be expressed as products or methods and excluding patents on abstract ideas and phenomena of nature. A few committee members are concerned that recent fairly abstract patents cross this indistinct line and have unwisely limited public access to ideas and techniques that are important to basic scientific research. Recent examples include patents on the use of a specific genetic characteristic to infer a specific phenotypic characteristic and the use of specific protein coordinates in a computer program to search for protein complexes.

Differences among national patent systems continue to result in avoidable costs and delays. In spite of progress in harmonizing the U.S., European, and Japanese patent examination systems, important differences in standards and procedures remain, ensuring search and examination redundancy that imposes high costs on users and hampers market integration. It is estimated to cost as much as $750,000 to $1 million to obtain comprehensive worldwide patent protection for an important invention, and that figure is increasing at a rate of 10 percent a year. Important differences include: Only the United States gives preference to the “first to invent” rather than the “first to file”; only the United States requires that a patent application disclose the “best mode” of implementing an invention; U.S. law allows a grace period of one year, during which an applicant can disclose or commercialize an invention before filing for a patent, whereas Japan offers a more limited grace period and Europe provides none.
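
A rough projection makes clear how quickly that burden compounds. The Python sketch below uses only the cost range and growth rate cited above; it is an illustration, not an official estimate.

```python
# Rough projection of worldwide patent-protection costs, using the figures cited
# above: roughly $750,000 to $1 million today, growing about 10 percent a year.

low, high = 750_000, 1_000_000
annual_growth = 0.10

for years in (5, 10):
    factor = (1 + annual_growth) ** years
    print(f"In {years} years: ${low * factor:,.0f} to ${high * factor:,.0f}")

# At 10 percent a year, the cost roughly doubles in about 7 years and grows
# about 2.6-fold in 10 (1.1**10 is approximately 2.59).
```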

Some U.S. practices seem to be slowing the dissemination of information. In the United States there are many channels of scientific interaction and technical communication, and the patent system contributes more to the flow of information than does the alternative of maintaining technical advances as trade secrets. There are nonetheless features peculiar to the U.S. patent system that inhibit information dissemination. One is the exclusion of a significant number of U.S. patent applications from publication after 18 months, an international norm since 1994. A second U.S. idiosyncrasy is the legal doctrine of willful infringement, which can require an infringer to pay triple damages if it can be demonstrated that the infringer was aware of the violated patent before the violation. Some observers believe that this deters an inventor from looking at the patents of possible competitors, because knowledge of the patent could later make the inventor subject to triple damages if there is an infringement case. This undermines one of the principal purposes of the patent system: to make others aware of innovations that could help stimulate further innovation.

Access to patented technologies is important in research and in the development of cumulative technologies, where one advance builds on one or several previous advances. Faced with anecdotes and conjectures about restrictions on researchers, particularly in biotechnology, the committee initiated a modest interview-based survey of diverse participants in the field to determine whether patent thickets were emerging or access to foundational discoveries was restricted. The results suggest that intellectual property in biotechnology is being managed relatively successfully. The associated costs are somewhat higher and research can sometimes be slowed, but research is rarely blocked.

Committee members involved with university research noted a recent court ruling that could limit the use of patented procedures in basic research. Universities have traditionally operated under an unwritten assumption that they would not be sued by a for-profit patent holder for violating a patent in the course of precommercial university research, but a 2002 federal court ruling made it clear that a university is not legally protected from patent infringement liability. It remains to be seen whether this will change the behavior of patent holders toward university research.

In some areas, the playing field needs to be leveled so that all intellectual property rights holders enjoy the same benefits while being subject to the same obligations. In 1999, the Supreme Court struck down a law that had denied states the ability, under the Eleventh Amendment to the Constitution, to claim immunity against charges of infringing a patent or other intellectual property. Thus, state institutions such as public universities can claim immunity against infringement suits. As a result, a public university that holds a patent could be in the position of asserting its patent rights against an infringer while successfully barring a patent holder from recovering damages for the university’s infringement of a patent. A private university would not be protected from infringement suits, which could conceivably influence decisions on where research is done. It is too soon to know what the effects of the Supreme Court decision will be, but it bears watching.

Toward a better system

The NRC committee supports several steps to ensure the vitality and improve the functioning of the patent system:

Preserve an open-ended, unitary, flexible patent system. The system should remain open to new technologies, and the features that allow somewhat different treatment of different technologies should be preserved without formalizing different standards, for example in statutes that would be exceedingly difficult to draft appropriately and equally difficult to change if found to be inappropriate. Among the tailoring mechanisms that should be exploited is the USPTO’s development of examination guidelines for new or newly patented technologies. In developing such guidelines, the office should seek advice from a wide variety of sources and maintain a public record of the submissions. The results should then be part of the record of any appeal to a court, so that they can inform judicial decisions.

This information could be of particular value to the Court of Appeals for the Federal Circuit, which is in most instances the final arbiter of patent law. In order to keep this court well informed about relevant legal and economic scholarship, it should encourage the submission of amicus briefs and arrange for temporary exchanges of members with other courts. Appointments to the Federal Circuit should include people familiar with innovation from a variety of perspectives, including management, finance, and economic history, as well as nonpatent areas of law that could have an effect on innovation.

Reinvigorate the nonobviousness standard. The requirement that to qualify for a patent an invention cannot be obvious to a person of ordinary skill in the art should be assiduously observed. In an area such as business methods, where the common general knowledge of practitioners is not fully described in published literature likely to be consulted by patent examiners, another method of determining the state of knowledge needs to be employed. Given that patent applications are examined ex parte between the applicant and the examiner, it would be difficult to bring in other expert opinion at that stage. Nevertheless, the open review procedure described below provides a means of obtaining expert participation if a patent is challenged.

Gene sequence patents present a particular problem because of a Federal Circuit ruling that with this technology, obviousness is not relevant to patentability. This is unwise in its own right and is also inconsistent with patent practice in other countries.

Institute an “Open Review” procedure. Congress should pass legislation creating a procedure for third parties to challenge patents after their issuance in a proceeding before administrative patent judges of the USPTO. The grounds for a challenge could be any of the statutory standards–novelty, utility, nonobviousness, disclosure, or enablement–or the case law proscription on patenting abstract ideas and natural phenomena. The time, cost, and other characteristics of this proceeding should make it an attractive alternative to litigation to resolve questions of patent validity. For example, federal district courts could more productively focus their attention on patent infringement issues if they were able to refer validity questions to an Open Review proceeding, which is described in more detail in a chapter I coauthored in Patents in the Knowledge-Based Economy, the companion volume to the NRC committee report.

Strengthen USPTO resources. To improve its performance, the USPTO needs additional resources to hire and train additional examiners and implement a robust electronic processing capability. Further, the USPTO should create a strong multidisciplinary analytical capability to assess management practices and proposed changes, provide an early warning of new technologies being proposed for patenting, and conduct reliable, consistent, reputable quality reviews that address office-wide as well as individual examiner performance. The current USPTO budget is not adequate to accomplish these objectives, let alone to finance an efficient Open Review system.

Shield some research uses of patented inventions from liability for infringement. In light of the Federal Circuit’s 2002 ruling that even noncommercial scientific research enjoys no protection from patent infringement liability, and in view of the degree to which the research community has relied on such an exemption, there should be limited protection for some research uses of patented inventions. Congress should consider appropriate targeted legislation, but reaching agreement on how this should be done will take time. In the meantime, the Office of Management and Budget and the federal government agencies sponsoring research should consider extending “authorization and consent” to those conducting federally supported research. This action would not limit the rights of the patent holder, but it would shift infringement liability to the government. It would have the additional benefit of putting federally sponsored research in state and private universities on the same legal footing.

Modify or remove the subjective elements of litigation. Among the factors that increase the cost and decrease the predictability of patent infringement litigation are issues unique to U.S. patent jurisprudence that depend on the assessment of a party’s state of mind at the time of the alleged infringement or the time of patent application. These include whether someone “willfully” infringed a patent, whether a patent application included the “best mode” for implementing an invention, and whether a patent attorney engaged in “inequitable conduct” by intentionally failing to disclose all prior art when applying for a patent. Investigating these questions requires time-consuming, expensive, and ultimately subjective pretrial discovery. The committee believes that significantly modifying or eliminating these rules would increase the predictability of patent dispute outcomes without substantially affecting the principles that these aspects of the enforcement system were meant to promote.

Harmonize the U.S., European, and Japanese patent examination systems. The United States, Europe, and Japan should further harmonize patent examination procedures and standards to reduce redundancy in search and examination and eventually achieve mutual recognition of applications granted or denied. The committee recommends that the United States should conform to practice elsewhere by adopting the first-to-file system, dropping the “best mode” requirement, and eliminating the current exceptions to the rule of publication of an application after 18 months. The committee also recommends that other jurisdictions adopt the U.S. practice of a grace period for filing an application. These objectives should be pursued on a trilateral or even bilateral basis if multilateral negotiations do not progress.

In making these recommendations, the NRC committee was mindful that although the patent law is designed to be uniform across all applications, its practical effects vary across technologies, industries, and classes of inventors. There is a tendency in discourse on the patent system to identify problems and solutions to them from the perspective of one field, sector, or class. Although the committee did not attempt to deal with the specifics of every affected field, the diversity of the membership enabled it to consider each of the proposed changes from the perspective of very different sectors. Similarly, it examined very closely the claims made that one class of inventors–usually individuals and very small businesses–would be disadvantaged by some aspect of the patent system. Some of the committee’s recommendations–universal publication of applications, Open Review, and shifting to a first-inventor-to-file system–have in the past been vigorously opposed on those grounds. The committee concluded that the evidence for such claims is wanting, and that its recommendations, on balance, would be as beneficial to small entities as to the economy at large.

Flying Blind on Drug Control Policy

Not knowing about the actual patterns of illicit drug abuse and drug distribution cripples policymaking. As the subtitle of a National Academies report put it four years ago, “What We Don’t Know Keeps Hurting Us.” (Currently, we don’t even know whether the total dollar volume of illicit drug sales is going up or down from one year to the next.) It hurts more when the most cost-effective data collection programs are killed, as happened recently to the Arrestee Drug Abuse Monitoring (ADAM) program of the National Institute of Justice (NIJ).

Determining the actual patterns of illicit drug abuse is difficult because the people chiefly involved aren’t lining up to be interviewed. Heavy users consume the great bulk of illicit drugs, and the vast majority of them are criminally active. About three-quarters of heavy cocaine users are arrested for felonies in the course of any given year. But somehow these criminally active heavy users don’t show up much in the big national surveys.

In the largest and most expensive of our drug data collection efforts, the household-based National Survey on Drug Use and Health, only a tiny proportion of those who report using cocaine frequently report ever having been arrested. Most of the criminally active heavy drug users are somehow missed: Either they’re in jail or homeless and therefore not part of the “household” population, or they’re not home when the interviewer comes, or they refuse to be interviewed. (The total “nonresponse” rate–not-homes plus refusals–for the household survey is about 20 percent. Because the true prevalence of heavy cocaine use in the adult household population is on the order of 1 percent, that’s devastating.) An estimate of total cocaine consumption derived from the household survey would account for only about 10 percent of actual consumption, or about 30 metric tons out of about 300.
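
To make the arithmetic concrete, the back-of-the-envelope Python sketch below uses only the rough figures cited above, plus an assumed adult household population of about 200 million, which is a hypothetical round number used purely for illustration.

```python
# Back-of-the-envelope illustration of why the household survey misses most
# heavy cocaine use. All figures are the rough ones cited in the text, except
# the adult household population, which is an assumed round number.

adult_household_pop = 200_000_000   # assumed adult household population (illustrative)
heavy_use_prevalence = 0.01         # roughly 1 percent are heavy cocaine users
nonresponse_rate = 0.20             # not-homes plus refusals

heavy_users = adult_household_pop * heavy_use_prevalence
nonrespondents = adult_household_pop * nonresponse_rate

# The pool of nonrespondents (to say nothing of the jailed and homeless, who are
# outside the sampling frame entirely) dwarfs the group being estimated.
print(f"Heavy users in the household population: ~{heavy_users:,.0f}")
print(f"Nonrespondents:                          ~{nonrespondents:,.0f}")

# Consumption coverage: a survey-based estimate accounts for ~30 of ~300 metric tons.
survey_estimate_tons, actual_tons = 30, 300
print(f"Share of consumption captured: {survey_estimate_tons / actual_tons:.0%}")
```

The point is not the exact totals but the mismatch in scale: a small, concentrated population is easy to lose inside a much larger pool of people who never answer the survey at all.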

So when the National Drug Control Strategy issued by the Office of National Drug Control Policy (often referred to as “the drug czar’s office”) bases its quantitative goals for reducing the size of the drug problem on changes in self-reported drug use in the household survey (or in the Monitoring the Future survey aimed at middle-school and high-school students), it’s mostly aiming at the wrong things: not the number of people with diagnosable substance abuse disorders, or the volume of drugs consumed, or the revenues of the illicit markets, or the crime associated with drug abuse and drug dealing. All of those things are arguably more important than the mere numbers of people who use one or another illicit drug in the course of a month, but none of them is measured by the household survey or the Monitoring the Future survey.

If most drugs are consumed by heavy users and most heavy users are criminally active, then to understand what’s happening on the demand side of the illicit drug business we need to study criminally active heavy users. The broad surveys can provide valuable insight into future drug trends; in particular, the “incidence” measure in the household survey, which picks up the number of first-time users of any given drug, is a useful forecasting tool. But because the vast majority of casual users never become heavy users, and because the rate at which casual users develop addictive disorders is neither constant nor well understood, spending a lot of money figuring out precisely how many once-a-month cocaine users there are isn’t really cost-effective.

The obvious places to look for criminals are the jails and police lockups where they are taken immediately after arrest. So if most of the cocaine and heroin in the country is being used by criminals, why not conduct another survey specifically focused on arrestees?

That was the question that led to the data collection effort first called Drug Use Forecasting and then renamed Arrestee Drug Abuse Monitoring (ADAM). Because ADAM was done in a few concentrated locations, it was able to incorporate what neither of the big surveys has ever had: “ground truth” in the form of drug testing results (more than 90 percent of interviewees agreed to provide urine specimens) as a check on the possible inaccuracy of self-reported data on sensitive questions.

The good news about ADAM was that it was cheap ($8 million per year, or about a fifth of the cost of the household survey) and produced lots of useful information. The bad news is that the program has now been cancelled.

The proximate cause of the cancellation was the budget crunch at the sponsoring agency, the NIJ. The NIJ budget, at about $50 million per year, is about 5 percent of the budget of the National Institute on Drug Abuse (NIDA), which sponsors Monitoring the Future, and about 10 percent of the budget of the Center for Substance Abuse Treatment, which funds the household survey. That’s part of a pattern commented on by Peter Reuter of the University of Maryland: More than 80 percent of the actual public spending on drug abuse control goes for law enforcement, but almost all of the research money is for prevention and treatment. (Private and foundation donors are even less generous sponsors of research into the illicit markets.) Thus, the research effort has very little to say about the effectiveness of most of the money actually spent on drug abuse control.

But the picture is even worse than that, because most of the NIJ budget is earmarked by Congress for “science and technology” projects (mostly developing new equipment for police). When Congress cut the NIJ budget from $60 million in fiscal year (FY) 2003 to $47.5 million in FY 2004, it also reduced the amount available for NIJ’s behavioral sciences research from $20 million to $10 million. Although the $8 million spent on ADAM seems like a pittance compared to the household survey, NIJ clearly couldn’t spend four-fifths of its total crime research budget on a single data collection effort.

However, the NIJ could have continued to fund a smaller effort, involving fewer cities and perhaps annual rather than quarterly sampling efforts. For whatever reason, ADAM seems to have been unpopular at the top management level ever since Sarah Hart replaced Jeremy Travis and his interim successor Julie Samuels as the NIJ director in August 2001.

Unconventional sampling

In addition to its budgetary problems, ADAM had a problem of inadequate scientific respectability because of its unconventional sampling process. ADAM was a sample of events–arrests–rather than a sample of people. The frequency of arrest among a population of heavy drug users varies from time to time in unknown ways and for causes that may be extraneous to the phenomenon of drug abuse. For example, if the police in some city cut back on prostitution enforcement to increase enforcement against bad-check passers, and if the drug use patterns of bad-check passers differ from those of prostitutes, the ADAM numbers in that city might show a drop in the use of a certain drug that didn’t reflect any change in the underlying drug market. So it isn’t possible to make straightforward generalizations from ADAM results to the population of heavy drug users or even to the population of criminally active drug users. Moreover, because arrest practices and the catchment areas of the lockups where ADAM took its samples varied from one jurisdiction to another (Manhattan and Boston, for example, are purely big-city jurisdictions, whereas the central lockup in Indianapolis gets its arrestees from all of Marion County, which is largely suburban), the numbers aren’t strictly comparable from one jurisdiction to another. (If Indianapolis arrestees are less likely to be cocaine-positive than Manhattan arrestees, the difference can’t be taken as an estimate of the difference in drug use between Indianapolis-area criminals and New York-area criminals.) The effects of these variations turned out to be small, but that didn’t entirely placate some of the statistical high priests.
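
The composition effect described above can be made concrete with a stylized calculation; the Python sketch below uses invented positive rates and arrest mixes purely for illustration.

```python
# Stylized illustration (all numbers invented) of how a shift in arrest mix can
# move arrestee-testing results without any change in underlying drug use.

def positive_rate(mix, rates):
    """Overall share of arrestees testing positive, given the share of arrests
    coming from each offense group and each group's positive rate."""
    return sum(mix[group] * rates[group] for group in mix)

# Assumed cocaine-positive rates by offense group (hypothetical).
rates = {"prostitution": 0.60, "bad_checks": 0.30}

# Enforcement mix before and after police shift attention to bad-check cases.
before = {"prostitution": 0.50, "bad_checks": 0.50}
after = {"prostitution": 0.20, "bad_checks": 0.80}

for label, mix in (("before", before), ("after", after)):
    print(f"{label}: {positive_rate(mix, rates):.0%} of arrestees test positive")

# Output: 45% before, 36% after -- a drop driven entirely by the arrest mix,
# not by any change in either group's drug use.
```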

In the real world, these are manageable problems, at least when you consider the fact that the big surveys miss most of what’s actually going on. But in the world of classical statisticians and survey research experts, the absence of a known sampling frame–and consequently of well-defined standard errors of estimate–is a scandal too horrible to contemplate. “ADAM,” sniffed one of them in my presence, “isn’t actually a sample of anything. So it doesn’t actually tell you anything.” (In my mind’s ear, I hear the voice of my Bayesian statistics professor saying, “Nothing? Surely it doesn’t tell you nothing. The question is: What does it tell you? How does your estimate of what you’re interested in change in the presence of the new data?”)
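
In that Bayesian spirit, even data without a textbook sampling frame shifts a defensible estimate. The sketch below is a minimal Beta-Binomial update with entirely hypothetical numbers, not anything drawn from ADAM itself.

```python
# Minimal Bayesian (Beta-Binomial) update, with hypothetical numbers: a prior
# belief about the cocaine-positive rate among arrestees, revised after a new
# batch of test results. Imperfect data still tells you something; the question
# is how much weight it deserves.

prior_alpha, prior_beta = 40, 60     # prior roughly centered on a 40% positive rate
positives, negatives = 130, 70       # hypothetical new test results

post_alpha = prior_alpha + positives
post_beta = prior_beta + negatives

prior_mean = prior_alpha / (prior_alpha + prior_beta)
post_mean = post_alpha / (post_alpha + post_beta)

print(f"Prior mean positive rate:     {prior_mean:.1%}")   # 40.0%
print(f"Posterior mean positive rate: {post_mean:.1%}")    # 56.7%
```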

What to the guardians of statistical purity looks like a defense of scientific standards looks to anyone trained in the Bayesian tradition like the man looking for his lost keys under the lamppost, rather than in the dark alley where he lost them, because the light is better under the lamppost. But the argument that a data series isn’t scientifically valid is a powerful one, especially among nonscientists. Even before ADAM was killed, there was pressure to expand the number of cities covered and include some rural areas. The result was to make it much more expensive but not really much more useful. A national probability sample of arrestees would be nice, but it wouldn’t obviously be worth the extra expense.

In an ideal world, of course, we would have a panel of criminally active heavy drug users to interview at regular intervals. But that project is simply not feasible. ADAM was a rough-and-ready substitute and provided useful information about both local and national trends. Outside the classical statistics textbooks, not having a proper sampling frame isn’t really the same thing as not knowing anything.

So ADAM wound up caught in the middle. It was no longer a cheap (and therefore highly cost-effective) quick-and-dirty convenience sample of arrestees in two dozen big cities. It was also not an equal-probability sample from a well-defined sampling frame and therefore not fully respectable scientifically. That made it hard for the drug czar’s office, which very much wanted and still wants some sort of arrestee-testing system, to persuade NIDA to fund it when NIJ couldn’t or wouldn’t maintain it. NIDA regards itself as a health research agency, so anything that looks like law enforcement research naturally takes a back seat there. And to an agency that measures itself by publications in refereed scientific journals rather than information useful to policymakers, ADAM’s sampling issues look large rather than small.

The situation isn’t hopeless. Both the NIJ and the drug czar’s office have expressed their intention to reinstitute some sort of program to measure illicit drug use among arrestees. But no one seems quite sure yet what form that system will take or who will pay for it. I would suggest a hybrid approach: Get quarterly numbers from lockups in the 20 or so biggest drug market cities, and run a separate program to conduct interviews and collect specimens once a year from a much larger and more diverse subset–not necessarily an equal-probability sample–of lockups nationally. That won’t provide a computable standard error of estimate, but it will give us some cheap and highly useful data to use in making and evaluating drug control policies. And at least with that solution, our national drug control effort won’t be flying blind, as it is today.

Recoupment Efforts Threaten Federal Research

In recent years, members of Congress and health advocates have proposed legislative “recoupment measures” that would inappropriately and unfairly place the onus for the pricing and affordability of therapeutic drugs and biologics on academic and other nonprofit research institutions by imposing levies on their royalty income streams. There are at least three compelling reasons why such efforts are problematic and should be opposed. First, these proposals rest on the unfounded assumption of a causal relationship between industry pricing policies and academic institutions’ receipt of royalty streams from successful licensing and commercialization of intellectual property derived from federally funded research. Second, such proposals imply that current federal science policy neither seeks nor achieves a satisfactory “return on investment” in the form of widespread public health benefits, thereby contradicting the most compelling arguments for public funding of scientific research. Third, there is no reason to believe that once implemented, such approaches, now directed at certain “blockbuster” drugs whose origins may be traced, often circuitously, to National Institutes of Health (NIH)-funded research, would not be applied more broadly to other commercial products, both within and outside biomedicine, that yield patent income to research institutions.

Proposals to use federal research funding and patent law to leverage drug pricing go back more than a decade. More recently, in December 2000, Congress attached report language to appropriations legislation funding NIH, noting that: “The conferees have been made aware of the public interest in securing an appropriate return on the NIH investment in basic research. The conferees are also aware of the mounting concern over the cost to patients of therapeutic drugs.” Congress instructed NIH to list all Food and Drug Administration (FDA)-approved drugs with annual sales exceeding $500 million that were developed with NIH support. It further directed NIH to prepare a plan to ensure that “taxpayer’s interests” are protected.

According to NIH’s report, released the following year, 47 FDA-approved therapeutic drugs under patent generated U.S. sales of more than $500 million annually, but only four of them were determined to have been derived directly from patents generated by NIH-funded research. Although NIH-funded research in the past two decades has generated thousands of patents held by universities and other nonprofit institutions, and some of this research might have contributed to the science behind the development of many of these 47 drugs, NIH complied with the language of the directive as required and confined its report to a reaffirmation of the principles supporting technology transfer and a commitment to improve its systems for the (mandated) reporting of new inventions by research awardees. Of course, additional “eligible” drugs might appear at any time, especially in response to the scientific efflorescence stimulated by the doubling of the NIH budget over the past five years.

The conference language reflected a compromise in response to an amendment introduced by Sen. Ron Wyden (D-Oreg.) and to proposals by the late Senator Paul Wellstone and Rep. Bernie Sanders (I-Vt.) relating to “reasonable pricing” of pharmaceuticals. Senator Wyden’s amendment, which was tabled, required, “as a condition of receiving a grant or contract from the National Institutes of Health,” that an academic institution or other entity give assurance that it would transfer to the NIH director a percentage of the funds it receives from licenses or sales of a broad range of pharmaceuticals. The amendment would have applied to “any pharmaceutical, pharmaceutical compound, or drug delivery mechanism (including biologics and vaccines) approved by the Food and Drug Administration” that used results from a research award and met the $500 million threshold. It was rationalized as a “payback” of the original NIH investment.

These concerns appeared to resonate in Congress. Sen. Tom Harkin (D-Ia.), a stalwart supporter of NIH funding and then chairman of the Senate NIH appropriations subcommittee, noted that the recently approved anti-leukemia drug Gleevec, which was derived in part from federally funded research, will cost patients between $2,000 and $3,000 a month. He concluded that “we need to figure . . . how to get some of the money to come back to NIH.” The administration at least briefly echoed these concerns when Department of Health and Human Services (DHHS) Secretary Tommy Thompson stated that NIH should recoup a portion of the proceeds from pharmaceuticals it helps develop, and that the money could be used for a prescription drug-purchasing program. Thompson never reiterated this proposal, although Rep. Rahm Emanuel (D-Ill.) and other Democrats unsuccessfully introduced similar recoupment mechanisms during debate on the Medicare prescription drug benefit bill enacted in 2003. Wyden, for his part, has continued to focus on this issue, most recently demanding another NIH report on steps the agency takes to ensure the availability and affordability of products developed with federal resources. NIH’s release of this report is pending.

A General Accounting Office (GAO) report, NIH-Private Sector Partnership in the Development of Taxol, released June 6, 2003, criticized NIH for not ensuring a reasonable commercial price from the manufacturer of the cancer drug Taxol and for failing to negotiate a larger royalty on its sales. According to the report, worldwide sales of Taxol from 1993 to 2002 exceeded $9 billion, of which NIH received royalties at a rate of one-half percent ($35 million). In response to a draft of the report, NIH argued that its negotiating position was greatly limited because it did not hold a patent on Taxol. Moreover, NIH believes it acted to get a promising compound into production and therapeutic application as expeditiously as possible. Senator Wyden, who initiated the GAO study, opined that NIH does not appropriately manage the technologies it helps to spawn: “[T]his report proves that NIH does not understand that as part of its mandate to get drugs to market quickly, it must effectively move to make sure that patients can afford those products. They should also work to get taxpayers a square deal for their investment.”

A long campaign

These proposals and the report are but the latest in a series of congressional reactions to the prices of pharmaceuticals arguably derived in part from publicly funded research. An earlier effort, the reasonable-pricing clause adopted by NIH in 1989, targeted industry partnerships with NIH. The clause, which applied to exclusive licenses on intellectual property arising from these collaborations (and to certain other licensing arrangements for NIH-owned patents), recommended that there be a “reasonable relationship” between the pricing of the licensed product, the level of public investment in the product, and the health and safety needs of the public. The clause was reportedly implemented to address political concern over the cost of the HIV/AIDS drug AZT. Shortly after the appointment of Harold Varmus as NIH director in 1994, the agency reviewed the clause’s impact on its agreements, and Varmus, concluding that the clause “has driven industry away from potentially beneficial collaborations,” rescinded it in 1995. The number of NIH Cooperative Research and Development Agreements signed with industry nearly tripled in the following year (from 32 to 87) and climbed fourfold in subsequent years.

Other attempts to constrain the prices of therapeutics have sought to work within existing legislation, including the provisions of the 1980 Bayh-Dole Act under which funding agencies can “march in” and take title to property rights originally ceded to a contractor or grantee. Most recently, in January 2004, a citizens’ group petitioned DHHS and NIH to exercise march-in rights and compel licensing to third parties of the patented HIV drug Norvir (ritonavir) after Abbott Laboratories abruptly raised the price of the drug more than 400 percent. Many AIDS patients and advocates were understandably irate, given that the drug is most frequently used as a “metabolic booster” with other combination antiretroviral drug therapies, which just so happen to compete with Abbott’s own, newer antiretroviral cocktail. Thus, one effect of the price hike was to force an increase in the costs of the competing regimens. The petitioner, Essential Inventions, Inc., argued that the price hike is contrary to the requirement under Bayh-Dole to make an invention available to the public on reasonable terms and is also detrimental to the overriding public health interest. A related antitrust complaint has also been filed with the Federal Trade Commission.

Although no university research laboratory was involved in the development of Norvir, and although most legal experts do not support the assertions of the petition, the academic community has followed the case closely, concerned that invocation of the federal march-in provision could create a precedent for using Bayh-Dole to regulate drug pricing. It is noteworthy that since the passage of Bayh-Dole, no federal agency has exercised march-in rights, and indeed, NIH had earlier asserted in rejecting another such petition (in the CellPro case) that it was “wary” of exercising its march-in rights to influence the marketplace. The requirement to make federally funded inventions available to the public on reasonable terms is generally interpreted by technology transfer officials and attorneys, including the authors of the Bayh-Dole Act, as preventing a contractor or grantee from “sitting on” an invention or otherwise not being diligent in commercializing a technology. The provision was not intended, according to the bill’s authors, to affect the market price of a resulting technology.

Frustration over the cost of high-profile pharmaceuticals may be aggravated by knowledge that about 75 percent of the papers cited in pharmaceutical patent applications are from publicly funded research. However, this literature typically describes advancements in basic scientific knowledge and enabling technologies, not discoveries directly connected to or embodied in the pharmaceutical compound or method. Drug affordability and availability have become explosive political issues both here and in third-world nations ravaged by AIDS and other treatable infectious diseases. Yet, royalties and other licensing expenses are but a small part of drug development costs and an inconsequential fraction of total sales revenues, and to our knowledge no convincing relationship between a licensing fee or royalty payment and the market price of a pharmaceutical has ever been demonstrated. Moreover, federal intervention to take title to inventions or to issue compulsory licenses, although at first glance perhaps appealing in particular high-profile circumstances, would create uncertainty and anxiety in the business environment for drug development and most likely exert a chilling effect on technology transfer agreements between industry and universities conducting federally funded biomedical research.

Social returns

The majority of the NIH’s sponsored research is performed by academic institutions and is published and broadly disseminated without monetary returns to the institutions or to NIH. From the perspective of the academic community, legislative or administrative proposals such as those described above threaten to confound a fundamental premise of federal science and technology policy: the requirement that awardee institutions not only disseminate useful knowledge but also transfer technology created from federally funded research to the private sector through licensing arrangements or other agreements. It is through these processes of knowledge dissemination and technology transfer that the public receives its rich return on the federal investment in basic science.

The Bayh-Dole Act was a response to concerns of the 1970s that many potential research products were “lying fallow” because of uncertainty about or difficulty in negotiating intellectual property rights with the several sponsoring federal agencies and the lack of sufficient incentive in academic institutions for commercializing their sponsored-research inventions. Since its enactment, patents issued to universities and other nonprofit institutions have risen from fewer than 250 in 1980 to more than 3,600 issued in 2002. The significance of the Bayh-Dole Act was that it obligated federal research awardees to pursue the movement of their research inventions into products and practice, and it removed the federal government as a party to negotiations. The act thereby encouraged commercial entities and venture capitalists to negotiate licensing arrangements with academic institutions without fear of federal intercession.

It is worth recalling that the Bayh-Dole Act’s key objective, as stated in its preamble, is to encourage the dissemination and utilization of technology, not to promote commercial returns on federal research investments either to agencies or academic institutions. Moreover, although the act encourages financial reward to inventors, it requires institutions to reinvest in their research programs whatever licensing income they receive. Although the number of patents issued to universities and other nonprofit institutions has increased dramatically, the great majority do not generate revenues sufficient to recover patenting expenses. Proposals that would recoup the public’s research investments by taxing awardee institutions’ royalty streams would run contrary to the express intentions of Bayh-Dole and represent a major departure from prevailing federal policy. Perhaps most troubling, the entire effort appears to be premised on faulty economics.

NIH and other federally funded scientific research generate a substantial societal return on investment. Indeed, the fundamental rationale of federal science policy since the end of World War II has been to invest tax dollars in basic scientific research to promote societal returns of improved health, strengthened national security, and enhanced economic performance. This has been the central argument advanced in Congress for funding NIH and other science agencies and has been echoed by the advocacy community.

Economic research first demonstrated in the 1950s that more than half of annual growth in U.S. gross national product was attributable to new technology and new knowledge, and later studies have confirmed the relationship of academic research with industrial innovation and prosperity. Estimates of a social rate of return on federally funded research have been reported between 25 percent and 50 percent annually. Improvements to health from medical and other research have been documented in studies of the role of academic research in the development of specific products or therapies. Declining rates of disability and generally improved quality of life indicators among older Americans directly correlate with innovations from biomedical research and have welcome implications for the financial burden of care placed on families and federal programs such as Medicare.

The generative effects of academic research on the electronic and computer science industries, as seen in Silicon Valley, Boston’s Route 128 corridor, and North Carolina’s Research Triangle, are recognized worldwide. Similarly, federal investments in biomedical research spawned the biotechnology industry and are reflected today in the concentration of biotechnology firms near leading biomedical research centers, which is attributable to interactions with leading academic scientists, ideas, and pools of university-trained personnel. Although comparatively little, if any, of the commercial value of these enterprises remunerates universities directly, these industries do provide the foundation for job creation, economic growth, and improved quality of life that are avidly sought and highly prized by local communities, states, and members of Congress. Witness the recent enactment of NIH’s revised Institutional Development Award (IDeA) program, expressly designed to jump-start the ability of “have-not” states to compete more successfully for NIH awards. Legislators advocating this program appear to have little doubt about the rich socioeconomic returns that can accrue from the federal investment in biomedical research.

Other sectors of the economy also owe their existence to academic research inventions, and proposed recoupment schemes could logically be extended beyond biomedicine to all research fields. Indeed, if “return on investment” of federal research funds, or recovery of outlays, is the purported logic behind these recoupment initiatives, there is no reason why these other sources should not become seductive targets for tapping over time. Finally, the various recoupment proposals would tax one of the rare streams of unrestricted university revenues that can be used as seed money for bold research initiatives, to help pay for the increasingly costly infrastructure that is necessary to be competitive in research, and to offset some of the significant cost-sharing that federal research funding presently requires.

The nation relies almost entirely on the private sector to accomplish the development, testing, and production of new therapeutics, as well as vaccines, medical devices, and related products. It relies on academic institutions, teaching hospitals, and governmental institutions to perform most publicly funded research. The disposition of intellectual property rights provided by the Bayh-Dole Act has become an important and demonstrably successful component of the U.S. system of transferring knowledge and technology from academic laboratories to industry. As such, it is part of a system of innovation and development that has served the nation well and is admired, and increasingly emulated, by the rest of the world. The various proposals for recoupment will do nothing to mitigate the long-term problems of drug cost, affordability, and availability, but they will threaten a robust system of technology transfer that has brought immense benefit to the U.S. public and generated a splendid sustained return on the federal investment in basic science.

What Is Climate Change?

Believe it or not, the Framework Convention on Climate Change (FCCC), focused on international policy, and the Intergovernmental Panel on Climate Change (IPCC), focused on scientific assessments in support of the FCCC, use different definitions of climate change. The two definitions are not compatible, certainly not politically and perhaps not even scientifically. This lack of coherence has contributed to the current international stalemate on climate policy, a stalemate that matters because climate change is real and actions are needed to improve energy policies and to reduce the vulnerability of people and ecosystems to climate effects.

The latest attempt to move climate policy forward was the Ninth Conference of Parties to the FCCC, held December 1 to 12, 2003, in Milan, Italy, which took place amid uncertainty about whether the Kyoto Protocol, negotiated under the FCCC in 1997, would ever come into force. The protocol enters into force only after ratification by at least 55 countries, including industrialized countries that together accounted for at least 55 percent of the industrialized world’s 1990 carbon dioxide emissions. That threshold will not be reached as long as industrialized countries with significant emissions (including the United States and, thus far, Russia) refuse to ratify the protocol. Not surprisingly, climate policy experts have begun to look beyond the Kyoto Protocol to the next stage of international climate policy.
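
The arithmetic behind that stalemate is simple. The sketch below uses the commonly cited shares of industrialized-country 1990 emissions for the United States and Russia; treat these as approximate figures drawn from secondary reporting rather than from the treaty text itself.

```python
# Why the protocol's 55 percent threshold hinged on the United States and Russia.
# Shares of industrialized-country 1990 CO2 emissions, as commonly cited in
# reporting on the protocol; treat them as approximate.

shares = {"United States": 36.1, "Russia": 17.4}
threshold = 55.0

max_without_us = 100.0 - shares["United States"]
max_without_us_and_russia = max_without_us - shares["Russia"]

print(f"Maximum coverage without the U.S.:            {max_without_us:.1f}%")            # ~63.9%
print(f"Maximum coverage without the U.S. and Russia: {max_without_us_and_russia:.1f}%")  # ~46.5%
print(f"Threshold reachable without both? {max_without_us_and_russia >= threshold}")      # False
```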

Looking beyond Kyoto, if climate policy is to move past the present stalemate, leaders of the FCCC and IPCC must address their differing definitions of climate change. The FCCC defines climate change as “a change of climate that is attributed directly or indirectly to human activity, that alters the composition of the global atmosphere, and that is in addition to natural climate variability over comparable time periods.” By contrast, the IPCC defines climate change broadly as “any change in climate over time whether due to natural variability or as a result of human activity.” These different definitions have practical implications for decisions about policy responses such as adaptation. They also set the stage for endless politicized debate.

For decades, the options available to deal with climate change have been clear: We can act to mitigate the future effects of climate change by addressing the factors that cause changes in climate, and we can adapt to changes in climate by addressing the factors that make society and the environment vulnerable to the effects of climate. Mitigation policies focus on either controlling the emissions of greenhouse gases or capturing and sequestering those emissions. Adaptation policies focus on taking steps to make social and environmental systems more resilient to the effects of climate. Effective climate policy will necessarily require a combination of mitigation and adaptation policies. However, climate policy has for the past decade reflected a bias against adaptation, in large part due to the differing definitions of climate change.

The bias against adaptation is reflected in the schizophrenic attitude that the IPCC has taken toward the definition of climate change. Its working group on science prefers (and indeed developed) the broad IPCC definition. The working group on economics prefers the FCCC definition; and the working group on impacts, adaptation, and vulnerability uses both definitions. One result of this schizophrenia is an implicit bias against adaptation policies in the IPCC reports, and by extension, in policy discussions. As the limitations of mitigation-only approaches become apparent, policymaking necessarily has turned toward adaptation, but this has generated political tensions.

Under the FCCC definition, “adaptation” refers only to new actions in response to climate changes that are attributed to greenhouse gas emissions. It does not refer to improving adaptation to climate variability or changes that are not attributed to greenhouse gas emissions. From the perspective of the FCCC definition, without the increasing greenhouse gases, climate would not change, and the new adaptive measures would therefore be unnecessary. It follows that these new adaptations represent costs that would be unnecessary if climate change could be prevented by mitigation strategies. Under the logic of the FCCC definition of climate change, adaptation represents a cost of climate change, and other benefits of these adaptive measures are not counted.

This odd result may seem like a peculiarity of accounting, but it is exactly how one IPCC report discussed climate policy alternatives, and thus it has practical consequences for how policymakers think about the costs and benefits of alternative courses of action (see IPCC Second Assessment Synthesis of Scientific-Technical Information relevant to interpreting Article 2 of the UN Framework Convention on Climate Change at http://www.unep.ch/ipcc/pub/sarsyn.htm). The IPCC report discusses mitigation policies in terms of both costs and benefits but discusses adaptation policies only in terms of their costs. It is only logical that a policy that offers benefits would be preferred to a policy with only costs.

The bias against adaptation occurs despite the fact that adaptation policies make sense because the world is already committed to some degree of climate change and many communities are ill prepared for any change. Many, if not most, adaptive measures would make sense even if there were no greenhouse gas-related climate change. Under the logic of the FCCC definition of climate change, there is exceedingly little room for efforts to reduce societal or ecological vulnerability to climate variability and changes that are the result of factors other than greenhouse gases. From the broader IPCC perspective on climate change, adaptation policies also have benefits to the extent that they lead to greater resilience of communities and ecosystems to climate change, variability, and particular weather phenomena.

From the restricted perspective of the FCCC, it makes sense to look at adaptation and mitigation as opposing strategies rather than as complements and to recommend adaptive responses only to the extent that proposed mitigation strategies will be unable to prevent changes in climate in the near future. From the perspective of adaptation, the FCCC approach serves as a set of blinders, directing attention away from adaptation measures that make sense under any scenario of future climate. In the face of the obvious limitations of mitigation-only policies, reconciling the different definitions of climate change becomes more important as nations around the world necessarily move toward a greater emphasis on adaptation.

Why it matters

The narrow FCCC definition encourages passionate arguments not only about whether climate change is “natural” or human-caused, but also about whether observed or projected changes rise to the level of “dangerous interference” in the climate system. The goal of the FCCC is to take actions that prevent “dangerous interference” in the climate system. In the jargon of the climate science community, identification of climate change resulting from greenhouse gas emissions is called “detection and attribution.” Under the FCCC, without detection and attribution, or an expectation of future detection and attribution, of climate changes that result in “dangerous interference,” there is no reason to act. In a very real sense, action under the FCCC is necessarily based on claims of scientific certainty, whereas inaction is based on claims of uncertainty.

But climate change is about much more than perceptions of scientific certainty or uncertainty. As Margot Wallström, the European commissioner for the environment, told The Independent in 2001 in response to U.S. President George Bush’s announcement that the United States would pull out of the Kyoto Protocol, climate change “is not a simple environmental issue where you can say it is an issue where the scientists are not unanimous. This is about international relations; this is about economy, about trying to create a level playing field for big businesses throughout the world. You have to understand what is at stake and that is why it is serious.” It seems inescapable that climate policy involves factors well beyond science. If this is indeed true, debates putatively about science are really about other factors.

For example, even as the Bush administration and the Russian government note the economic disruption that would be caused by participating in the Kyoto Protocol, they continue to point to scientific uncertainty as a basis for their decisions, setting the stage for their opponents to argue certainty as the basis for changing course. Justifying the decision not to participate in the Kyoto Protocol, a senior Russian official explained, “A number of questions have been raised about the link between carbon dioxide and climate change, which do not appear convincing. And clearly it sets very serious brakes on economic growth, which do not look justified.” The Bush administration used a similar logic to explain its March 2001 decision to withdraw from the Kyoto Protocol: “. . . we must be very careful not to take actions that could harm consumers. This is especially true given the incomplete state of scientific knowledge of the causes of, and solutions to, global climate change.” The FCCC definition of climate change fosters debating climate policy in terms of “science” and thus encourages the mapping of established political interests onto science.

A February 2003 article in The Guardian relates details of the climate policy debate in Russia that show how the present approach fosters the politicization of science. The article reports that several Russian scientists “believe global warming might pep up cold regions and allow more grain and potatoes to be grown, making the country wealthier. They argue that from the Russian perspective nothing needs to be done to stop climate change.” As a result, “To try to counter establishment scientists who believe climate change could be good for Russia, a report on how the country will suffer will be circulated in the coming weeks.” In this context, any scientific result that suggests that Russia might benefit from climate change stands in opposition to Russia’s ratification. Science that shows the opposite supports Russia’s participation. Of this situation, one supporter of the Kyoto Protocol observed, “Russia’s ratification [of the protocol] is vitally important. If she doesn’t go ahead, years of hard-won agreements will be placed in jeopardy, and meanwhile the climate continues to change.” In this manner, science becomes irrevocably politicized, as scientific debate becomes indistinguishable from the political debate.

This helps to explain why all parties in the current climate debate pay so much attention to “certainty” (or perceptions of a lack thereof) in climate science as a justification for or against the Kyoto Protocol. Because it requires detection and attribution of climate change leading to “dangerous interference,” the FCCC definition of climate change focuses attention on the science of climate change as the trigger for action and directs attention away from discussion of energy and climate policies that make sense irrespective of the actual or perceived state of climate science. The longer the present gridlock persists, the more important such “no-regrets” policies will be to efforts to decarbonize the energy system and reduce human and environmental vulnerability to climate.

Under the FCCC definition of climate change, there is precious little room for uncertainty about the climate future; it is either dangerous enough to warrant action or it is not. Claims about the existence (or not) of a scientific consensus become important as surrogates for claims of certainty or uncertainty. This is one reason why climate change is often defined as a risk management challenge, and scientists promise policymakers the holy grail of reducing uncertainty about the future. In contrast, the IPCC quietly notes that under its definition of climate change, effective action requires “decisionmaking under uncertainty”–a challenge familiar to decisionmakers and research communities outside climate science.

The FCCC definition of climate change shapes not only the politics of climate change but also how research agendas are prioritized and funded. One result of the focus on detection and attribution is that political advocates as well as researchers have paid considerably more attention to increasingly irrelevant aspects of climate science (such as whether the 1500s were warmer than today) than to providing decisionmakers with useful knowledge that might help them to improve energy policies and reduce vulnerabilities to climate. It is time for a third way on climate policy.

Reformulating climate policy

The broader IPCC definition of climate change provides less incentive to use science as a cover for competing political perspectives on climate policy. It also sets the stage for consideration of a wide array of mitigation and adaptation policies. Under the broader definition, the IPCC assessments show clearly that the effects of climate change on people and ecosystems are not the result of a linear process in which a change in climate disrupts an otherwise stable society or environment. The real world is much more complex.

First, society and the environment undergo constant and dramatic change as a result of human activities. People build on exposed coastlines, in floodplains, and in deserts. Development, demographics, wealth, policies, and political leadership change over time, sometimes significantly and unexpectedly. These factors and many more contribute to the vulnerability of populations and ecosystems to the impacts of climate-related phenomena. Different levels of vulnerability help to explain, for example, why a tropical cyclone that makes landfall in the United States has profoundly different effects than a similar storm that makes landfall in Central America. There are many reasons why a particular community or ecosystem may experience adverse climate effects under conditions of climate stability. For example, a flood in an unoccupied floodplain may be noteworthy, but a similar flood in a heavily populated floodplain is a disaster. In this example, the development of the floodplain is the “interference” that makes the flood dangerous. Under the FCCC, any such societal change would not be cause for action, even though serious and adverse effects on people and ecosystems may result.

Second, climate changes on all time scales and for many reasons, not all of which are fully understood or quantified. Policy should be robust to an uncertain climate future, regardless of the cause of particular climate changes. Consider abrupt climate change. A 2003 review paper (of which I was a coauthor) in Science on abrupt climate change observes that “such abrupt changes could have natural causes, or could be triggered by humans and be among the ‘dangerous anthropogenic interferences’ referred to in the [FCCC]. Thus, abrupt climate change is relevant to, but broader than, the FCCC and consequently requires a broader scientific and policy foundation.” The IPCC definition provides such a foundation.

An implication of this line of thinking is that the IPCC should consider balancing its efforts to reduce and quantify uncertainty about the causes and consequences of climate change with an increase in its efforts to help develop policy alternatives that are robust irrespective of the specific degree of uncertainty about the future.

Whatever the underlying reasons for the different definitions of climate change, not only does the FCCC create a bias against adaptation, it ignites debates about the degree of certainty that inevitably lead to a politicization of climate change science. The FCCC definition frames climate change as a single linear problem requiring a linear solution: reduction of greenhouse gas emissions under a global regime. Years of experience, science, and policy research on climate suggest that climate change is not a single problem but many interrelated problems, requiring a diversity of complementary mitigation and adaptation policies at local, regional, national, and international levels in the public, private, and nongovernmental sectors.

An approach to climate change more consistent with the realities of science and the needs of decisionmakers would begin with a definition of climate that can accommodate complexity and uncertainty. The IPCC provides such a definition. It is time for scientists and policymakers to reconsider how climate policies might be designed from the perspective of the IPCC.

Is Human Spaceflight Obsolete?

During the past year, there has been a painstaking, and painful, investigation of the tragic loss of the space shuttle Columbia and its seven crew members on February 1, 2003. The investigation focused on technical and managerial failure modes and on remedial measures. The National Aeronautics and Space Administration (NASA) has responded by suspending further flights of its three remaining shuttles for at least two years while it develops the recommended modifications and procedures for improving their safety.

Meanwhile, on January 14, 2004, President Bush proposed a far more costly and far more hazardous program to resume the flight of astronauts to and from the Moon, beginning as soon as 2015, and to push forward with the development of “human missions to Mars and the worlds beyond.” This proposal is now under consideration by congressional committees.

My position is that it is high time for a calm debate on more fundamental questions. Does human spaceflight continue to serve a compelling cultural purpose and/or our national interest? Or does human spaceflight simply have a life of its own, without a realistic objective that is remotely commensurate with its costs? Or, indeed, is human spaceflight now obsolete?

I am among the most durable and passionate participants in the scientific exploration of the solar system, and I am a long-time advocate of the application of space technology to civil and military purposes of direct benefit to life on Earth and to our national security. Also, I am an unqualified admirer of the courageous individuals who undertake perilous missions in space and of the highly competent engineers, scientists, and technicians who make such missions possible.

Human spaceflight spans an epoch of more than forty years, 1961 to 2004, surely a long enough period to permit thoughtful assessment. Few people doubt that the Apollo missions to the Moon as well as the precursory Mercury and Gemini missions not only had a valuable role for the United States in its Cold War with the Soviet Union but also lifted the spirits of humankind. In addition, the returned samples of lunar surface material fueled important scientific discoveries.

But the follow-on space shuttle program has fallen far short of the Apollo program in its appeal to human aspirations. The launching of the Hubble Space Telescope and the subsequent repair and servicing missions by skilled crews are highlights of the shuttle’s service to science. Shuttles have also been used to launch other large scientific spacecraft, even though such launches did not require a human crew on a launching vehicle. Otherwise, the shuttle’s contribution to science has been modest, and its contribution to utilitarian applications of space technology has been insignificant.

Almost all of the space program’s important advances in scientific knowledge have been accomplished by hundreds of robotic spacecraft in orbit about Earth and on missions to the distant planets Mercury, Venus, Mars, Jupiter, Saturn, Uranus, and Neptune. Robotic exploration of the planets and their satellites as well as of comets and asteroids has truly revolutionized our knowledge of the solar system. Observations of the Sun are providing fresh understanding of the physical dynamics of our star, the ultimate sustainer of life on Earth. And the great astronomical observatories are yielding unprecedented contributions to cosmology. All of these advances serve basic human curiosity and an appreciation of our place in the universe. I believe that such undertakings will continue to enjoy public enthusiasm and support. Current evidence for this belief is the widespread interest in the images and inferences from the Hubble Space Telescope, from the new Spitzer Space Telescope, and from the intrepid Mars rovers Spirit and Opportunity.

In our daily lives, we enjoy the pervasive benefits of long-lived robotic spacecraft that provide high-capacity worldwide telecommunications; reconnaissance of Earth’s solid surface and oceans, with far-reaching cultural and environmental implications; much-improved weather and climatic forecasts; improved knowledge about the terrestrial effects of the Sun’s radiations; a revolutionary new global navigational system for all manner of aircraft and many other uses both civil and military; and the science of Earth itself as a sustainable abode of life. These robotic programs, both commercial and governmental, are and will continue to be the hard core of our national commitment to the application of space technology to modern life and to our national security.

The human touch

Nonetheless, advocates of human spaceflight defy reality and struggle to recapture the level of public support that was induced temporarily by the Cold War. The push for Mars exploration began in the early 1950s with lavishly illustrated articles in popular magazines and a detailed engineering study by renowned rocket scientist Wernher von Braun. What was missing then, and is still missing today, is a compelling rationale for such an undertaking.

Early in his first term in office, President Nixon directed NASA to develop a space transportation system, a “fleet” of space shuttles, for the transport of passengers and cargo into low Earth orbit and, in due course, for the assembly and servicing of a space station. He declared that these shuttles would “transform the space frontier of the 1970s to familiar territory, easily accessible for human endeavor in the 1980s and 1990s.” Advocates of the shuttle assured the president and the Congress that there would be about one shuttle flight per week and that the cost of delivering payloads into low Earth orbit would be reduced to about $100 per pound. They also promised that the reusable shuttles would totally supplant expendable unmanned launch vehicles for all purposes, civil and military.

Fast forward to 2004. There have been more than 100 successful flights of space shuttles–a noteworthy achievement of aerospace engineering. But at a typical annual rate of five such flights, each flight costs at least $400 million, and the cost of delivering payloads into low Earth orbit remains at or greater than $10,000 per pound–a dramatic failure by a factor of 100 from the original assurances. Meanwhile, the Department of Defense has abandoned the use of shuttles for launching military spacecraft, as have all commercial users of space technology and most of the elements of NASA itself.
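As a rough arithmetic check, using only the figures quoted in the paragraph above (a back-of-the-envelope sketch, not additional data):

\[
\frac{\text{actual cost to orbit}}{\text{promised cost to orbit}} \approx \frac{\$10{,}000 \text{ per pound}}{\$100 \text{ per pound}} = 100,
\qquad
5 \ \text{flights per year} \times \$400 \ \text{million per flight} \approx \$2 \ \text{billion per year}.
\]

Both the factor-of-100 shortfall and an annual flight cost on the order of $2 billion follow directly from the per-pound and per-flight figures cited above.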

In his State of the Union address in January 1984, President Reagan called for the development of an orbiting space station at a cost of $8 billion: “We can follow our dreams to distant stars, living and working in space for peaceful, economic, and scientific gain. . . . A space station will permit quantum leaps in our research in science, communications, in metals, and in lifesaving medicines which could be manufactured only in space.” He continued with remarks on the enormous potential of a space station for commerce in space. A year later he reiterated his enthusiasm for space as the “next frontier” and emphasized “man’s permanent presence in space” and the bright prospects for manufacturing large quantities of new medicines for curing disease and extraordinary crystals for revolutionizing electronics–all in the proposed space station.

Again, fast forward to 2004. The still only partially assembled International Space Station has already cost some $30 billion. If it is actually completed by 2010, after a total lapse of 26 years, the cumulative cost will be at least $80 billion, and the exuberant hopes for its important commercial and scientific achievements will have been all but abandoned.

The visions of the 1970s and 1980s look more like delusions in today’s reality. The promise of a spacefaring world with numerous commercial, military, and scientific activities by human occupants of an orbiting spacecraft is now represented by a total of two persons in space–both in the partially assembled International Space Station–who have barely enough time to manage the station, never mind conduct any significant research. After observing more than 40 years of human spaceflight, I find it difficult to sustain the vision of rapid progress toward a spacefaring civilization. By way of contrast, 612,000,000 revenue-paying passengers boarded commercial aircraft in the year 2002 in the United States alone.

In July 1989, the first President Bush announced his strategy for space: First, complete the space station Freedom (later renamed the International Space Station); next, back to the Moon, this time to stay; and then a journey to Mars–all with human crews. The staff at NASA’s Johnson Space Center dutifully undertook technical assessment of this proposal and published its Report on the 90-Day Study of Human Exploration of the Moon and Mars. But neither Congress nor the general public embraced the program, expertly estimated to cost some $400 billion, and it disappeared with scarcely a trace.

Drawing lessons

The foregoing summary of unfulfilled visions by successive presidents provides the basis for my skepticism about the future of the current president’s January 14, 2004, proposal, a kind of echo of his father’s 1989 proposal. Indeed, in 2004, there seems to be a much lower level of public support for such an undertaking than there was 15 years ago.

In a dispassionate comparison of the relative values of human and robotic spaceflight, the only surviving motivation for continuing human spaceflight is the ideology of adventure. But only a tiny number of Earth’s six billion inhabitants are direct participants. For the rest of us, the adventure is vicarious and akin to that of watching a science fiction movie. At the end of the day, I ask myself whether the huge national commitment of technical talent to human spaceflight and the ever-present potential for the loss of precious human life are really justifiable.

In his book Race to the Stratosphere: Manned Scientific Ballooning in America (Springer-Verlag, New York, 1989), David H. DeVorkin describes the glowing expectations for high-altitude piloted balloon flights in the 1930s. But it soon became clear that such endeavors had little scientific merit. At the present time, unmanned high-altitude balloons continue to provide valuable service to science. But piloted ballooning has survived only as an adventurous sport. There is a striking resemblance here to the history of human spaceflight.

Have we now reached the point where human spaceflight is also obsolete? I submit this question for thoughtful consideration. Let us not obfuscate the issue with false analogies to Christopher Columbus, Ferdinand Magellan, and Lewis and Clark, or with visions of establishing a pleasant tourist resort on the planet Mars.

Plugging the Leaks in the Scientific Workforce

In response to the dramatic decline in the number of U.S.-born men pursuing science and engineering degrees during the past 30 years, colleges and universities have accepted an unprecedented number of foreign students and have launched aggressive and effective programs aimed at recruiting and retaining underrepresented women and minorities. Since 1970, the number of bachelor’s and doctoral degrees earned by women and minorities has grown significantly. Despite these efforts, however, the science workforce remains in danger. Although we have become more successful at keeping students in school, we have paid relatively little attention to the success and survival of science graduates–regardless of race or gender–where it really counts: in the work world.

The numbers documenting occupational exit are striking and alarming. Data collected by the National Science Foundation (NSF) in the 1980s (Survey of Natural and Social Scientists and Engineers, 1982-1989) reveal that roughly 8.6 percent of men and 17.4 percent of women left natural science and engineering jobs between 1982 and 1989. A study that follows the careers of men and women who graduated from a large public university between 1965 and 1990 (the basis of my book) further confirms this two-to-one ratio. For science graduates with an average of 12.5 years since the highest degree, 31.5 percent of the women who had started science careers and 15.5 percent of the men were not employed in science at the time of the survey. Estimates from more recent NSF surveys conducted in the 1990s (SESTAT 1993-1999) give similar trends for more recent graduates and further show that, for women at the Ph.D. level, occupational exit rates from the natural sciences and engineering are double the exit rates from the social sciences.
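A quick check of the two-to-one ratio described here, using only the percentages reported above:

\[
\frac{17.4\%}{8.6\%} \approx 2.0
\qquad \text{and} \qquad
\frac{31.5\%}{15.5\%} \approx 2.0,
\]

so both the 1980s NSF survey and the university study imply that women leave science careers at roughly twice the rate of men.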

This magnitude of attrition from scientific jobs is especially troubling at a time when, even outside the scientific community, there is a growing awareness that a productive and well-trained scientific workforce is essential to maintaining a technologically sophisticated, competitive, and growing economy. In addition, exit from the scientific workplace is often wasteful and inefficient for the people involved. Individuals who have personally paid for a scientific education often turn to occupations in which their learned skills are not nearly as valuable. The social return on educational investments by the federal government also falls, and institutions that lose scientific employees cannot benefit from their often extensive investments in training.

A better understanding of why people leave scientific careers should ultimately lead to changes in the science education process and in the scientific workplace: modifications that will reduce attrition by both improving the information flow to potential scientific workers and making the scientific workplace more hospitable to career men and women. Such a body of knowledge is also likely to result in workplace enhancements that make science careers more attractive to high-performing educated men and women. Therefore, understanding exit is not only a good defense against attrition but also a valuable component of the strategy to increase the attraction and desirability of science.

The four major reasons for leaving science cited by survey respondents in the study are lack of earnings and employment opportunities, inability to combine family with a scientific career, lack of mentoring, and a mismatch of respondents’ interests and the requirements of a scientific job. A secondary reason involves the high rate of change of scientific knowledge, which leads to many temporary exits becoming permanent as skills deteriorate from lack of use. The factors behind exit separate along gender lines, with men overwhelmingly leaving science in search of higher pay and career growth and women leaving as a result of one of the other three factors, which often contribute to an overall sense of alienation from the field. Policy prescriptions can be organized according to the four factors, but because the factors are interrelated, any one policy action is likely to address multiple causes of exit. Similarly, the policy prescriptions need not be directed toward increasing retention of one gender or the other, because any proposal that enhances the attraction of scientific careers will benefit all participants in science.

Unmet expectations

Unmet salary and career expectations have become an important issue for U.S. scientists in the past 40 years. Early career progress has increasingly stalled, with multiple postdoctoral positions replacing permanent employment and scientific salaries dwarfed by those of other professionals such as doctors and higher-level business people. At the same time, financial success as a goal in itself has become more attractive and, in the 1990s, increasingly attainable in the management professions. Because a large portion of the scientific labor force is employed by government and nonprofit organizations, it is unlikely that salaries, especially at the high levels, will ever be competitive with top managerial salaries. To combat unmet expectations, information about careers must become more comprehensive and up to date. Students choosing scientific majors must know what types of careers they are being prepared for and what salaries, opportunities, and responsibilities they can anticipate.

Ideally, a government agency such as the National Science Board or a professional association should periodically conduct workforce surveys by field, with reports on job options, salaries, and salary growth for scientists with differing levels of education, within differing fields and specialties, and in varying cohorts. The cost and time of such studies can be drastically reduced by using Web technology. Reports on the studies should then be disseminated to all institutions of higher education, so that individual departments can post the results on well-publicized career Web sites for their students. Updating the studies frequently could help keep students well-informed as they progress through their studies.

Once students go into professions with their eyes open, the match between the individual and the career is more likely to be successful. People choosing science careers will be those who value scientific work enough to forego income earned elsewhere. However, even with excellent information, there still will be individuals whose needs and preferences change during their lifetimes, so that they may feel the need to leave science for higher-paying occupations. Improving information collection and flow will not solve the problem of unmet salary expectations completely, but it will go a long way to reduce its severity.

Second and equally important, pay and benefits for postdoctoral positions must be set at acceptable levels. In 2001, the annual salary for a first-year postdoc funded through the National Institutes of Health (NIH) was just over $28,000. Furthermore, outside of NIH there is a lot of variability in pay across fields and institutions. Most postdoctoral scientists are in their late 20s through their mid-30s, a time of life when many individuals are forming families. Low pay can create stress. With the increasing dependence on postdoc positions for early employment opportunities, especially in the biological sciences, low pay is discouraging young scientists from pursuing Ph.D.-level careers. Because many postdoctoral positions are financed by federal grants from NSF, NIH, and the Department of Defense, it is up to these organizations and the science community to educate Congress about the importance of acceptable salaries and to budget for them. The situation has improved slightly with NIH’s commitment to increase annual stipends for entering postdocs to $45,000 over a number of years. As of 2004, annual stipends for first-year postdocs had climbed by $7,600 to $35,700. But this one-time increase will not be enough. A regular review is needed to ensure adequate salaries for the scientific elite.

Well-thought-out and imaginative compensation schemes and career trajectories can be important tools for motivating and retaining existing employees, but there is little evidence that these tools have been wielded in the scientific workplace. Compensation-for-performance schemes are notoriously difficult to design in organizations that are not driven by profits and for employees who work in group settings and whose satisfaction is not tied solely to income. Because scientists find satisfaction in a host of nonmonetary attributes that include prestige, creative freedom, intellectual recognition, and responsibility, such attributes can be used to reward performance. But desired performance must be articulated and measured with care, and rewards must be continually reevaluated for relevance to the employees targeted. Deferred benefits or benefits that grow with seniority are elements of a compensation scheme that would encourage a continuing employment relationship. Because steep career trajectories and greater opportunities are luring scientists into management jobs, scientists seem to want not just more money but also the promise of broadened responsibilities as their tenure with an employer increases. Designing compensation schemes for scientists that reward both good performance and longevity might go a long way toward quieting complaints about the lack of opportunity in scientific careers. Here, private companies with more flexibility in how they spend their resources should take the lead, but the government and nonprofit organizations will have to follow suit in order to stay competitive in the labor market.

Balancing career and family

Family issues arise at different stages of family formation for scientists with different career aspirations. The issue of job location for the married couple is often a stumbling block for Ph.D. scientists who are anticipating an academic career. Master’s- and bachelor’s-level scientists, whose jobs are not so specialized, can find jobs in business and government in vibrant urban areas, but they often have trouble combining work and small children because of the more rigid work hours and policies that these jobs entail. Policy to address family issues, therefore, needs to come in a variety of forms.

Dual-career issues are especially thorny for Ph.D. scientists for a number of reasons. First, universities are geographically dispersed. Second, because of large space needs, universities are often built in non-urban areas that do not have vibrant labor markets outside of the university. Third, the early Ph.D. career, which often coincides with marriage and partnership, frequently requires several geographical relocations before a permanent job is secured. Finally, the compromises of the dual-career marriage are disproportionately made by female scientists, who are more likely than their male counterparts to be married to an employed professional and who are likely to be younger and less established than their spouses. Relocating universities is obviously not an option. Still, especially within out-of-the-way university communities, there can be stronger efforts to employ spouses of desired job candidates. Currently, such efforts are most often observed for star candidates, and often the spouse’s job offer is a step down in the career trajectory. Increasing the coverage of such efforts and ensuring that job opportunities for spouses are attractive on their own terms would help ease the problems. However, these programs can only be successful with considerable administrative support, because departments do not usually have the know-how or resources to put together a joint package alone.

We need to reexamine the requirement that Ph.D. scientists make a number of geographical moves in the early stages of their careers as they learn from different scientists in graduate school and postdoctoral appointments. With the increasing ease of communicating and traveling, long-distance collaboration and short-term collaborative research experiences might substitute for numerous geographical relocations. The extent of this substitution will necessarily differ by discipline and is likely to depend on the type of lab work performed and the extent to which researchers are tied physically to their laboratories. Because scientific career paths are well established and deeply entrenched in the scientific culture, change is not going to come easily. Furthermore, change will not come about at all unless it is supported by leaders of the scientific community.

Discipline-based associations, together with the National Academy of Sciences, should commission panels to study alternative ways to teach Ph.D. scientists. In the biological and health sciences, biotech firms seem to be offering alternative career paths already. Many firms will hire an employee after graduate school, providing a postdoctoral position that often leads to a permanent position. These relatively permanent employment opportunities in urban settings create solutions to dual-career problems. Elizabeth Marincola of the American Society for Cell Biology and Frank Solomon of the Massachusetts Institute of Technology have proposed creating staff scientist jobs in university laboratories for scientists who are looking for a more permanent and predictable employment situation. Although both of these options would be helpful, there is concern that, because the female scientist is more likely than her male counterpart to accept such a solution to the dual-career marriage, there is a risk of a two-tier workforce in which women take the predictable and permanent jobs and men choose the riskier and more prestigious academic route. Leaders in the academic community need to address these issues regarding the academic career path, because past experience has shown that such a gender-based allocation of scientific talent has not been conducive to attracting women into scientific pursuits.

Policies that help to balance the demands of child rearing and a scientific profession are likely to improve the quality of life and the productivity of all scientists who take on both career and family responsibilities. Employing institutions have many options to improve the quality of life of working parents, including but not limited to maternity/paternity leave, increased flexibility of work hours, telecommuting, unpaid personal days for childhood emergencies, a temporary part-time work option, and onsite day care. These reforms are crucial for the success of working parents in all areas of employment, not just in science workplaces, and if media coverage of workplace benefits can be trusted, such reforms have become more commonplace throughout the economy since the early 1990s.

Although Ph.D. scientists in academia often find that the flexibility and autonomy that these policies create help to coordinate child-rearing demands, the flexibility is often an illusion in the early years when, working for tenure, the scientist is putting in 60-to-70-hour weeks. For these scientists, such childcare benefits improve the quality of the individual’s work life but do not diminish the work time necessary to attain tenure. A policy increasingly being considered in academia, giving a parent extra time on the tenure clock for each child born while the clock is running, allows the working parent the opportunity to make up for some of the research time lost to early childhood parenting and to spread the time in research over a larger span of calendar years.

Together, these policies will help with the day-to-day strains of working parents, and mothers in particular, but they are not enough. Scientists employed at research universities believe that taking extra time off after giving birth and stopping the tenure clock is the kiss of death to one’s career (as the study that is the basis of my book makes clear), and the small number of faculty who do take advantage of this benefit are overwhelmingly female. Scientists do not trust that these activities will be viewed neutrally in the tenure decision. The result is that some women delay childbearing until after the tenure decision, which can be a risky strategy for a woman who wants a family; others take only a minimal maternity leave and return to work to compete as if they were childless; and still others take advantage of the benefit and hope that the gains of the extra time will outweigh any negative perceptions. The distrust felt by these women means that if such benefits are put in place in a college or university, the administration must stand by them and make sure that those who control tenure decisions support them as well. A special committee should be set up to review each tenure case in which the individual has taken advantage of a childcare-related benefit that gave the parent extra time away from teaching or the tenure clock. Ensuring that such activities are not penalized during promotion decisions is paramount to the success of working parents.

More generally, there are two important issues in the realm of work and family. The first is that there is a predominant feeling in the scientific community (and in society generally) that child rearing and careers are in direct conflict and that one has to be compromised for the other. Second, expectations are that women will make this compromise. Because employers assume that women will eventually take time off to care for children, they are likely to give them reduced opportunities early in the career. Once career options are lessened, the decision to put child rearing ahead of work is much easier. Thus, the prophecy becomes self-fulfilling. Claudia Goldin’s finding that only 13 to 17 percent of the college-educated women who graduated in the late 1960s through the 1970s had both a family and a career by age 40 is striking evidence of the fulfillment of these expectations. These two issues are difficult to address, because both are based on longstanding cultural norms concerning work, family, and gender roles. The U.S. workplace encourages competition and rewards stars with money, prestige, and opportunity. Technological developments that have recently increased labor productivity have had little impact on the child-rearing function, which offers no acceptable substitute for adult/child personal contact; therefore, child rearing is becoming increasingly expensive to U.S. employers. Because child rearing does take time from work and career development, even for full-time employees, the stars in the U.S. workplace in fields as diverse as business, science, and the arts are not likely to be men or women who spend a lot of time with children and family.

Both issues will become less problematic when men start taking on an increased share of childcare. Once this happens, childcare will be given higher status, and policies to help balance work and family will be given more attention. Furthermore, men and women will be treated much more equally in the labor market. If, in some ideal world, 50 percent of the child-rearing responsibilities were taken on by men, employers would not have differential expectations about the long-term commitment to work of men and women. Women and men would be given the same career opportunities leading up to childbirth and before the child-rearing choices have to be made. Although there may have been some change in the gender allocation of childcare during the past 30 years, data reveal that men still take on only a small portion of child-rearing responsibility. Even men who might be interested in staying home with children for a spell often resist taking advantage of policies such as paternity leaves, which they feel send the wrong signals to employers. Therefore, change will only occur if upper-level management in these employing institutions gives credible promises that there will be no negative repercussions in response to decisions to take advantage of childcare benefits.

Other advanced countries, Sweden most dramatically, have national policies aimed at equalizing male and female participation in both child rearing and work. Sweden’s Equal Opportunity Act of 1992 requires employers to achieve a well-balanced sex distribution in many jobs and to facilitate combining work and family responsibilities. Paid maternity or paternity leave, at 80 to 90 percent of salary, is mandated for 12 months. Sweden falls short of requiring men to take some part of this 12-month leave, but statistics show that about 70 percent of fathers take some time off and that these leaves have recently been getting longer. Given the contentiousness that marked congressional debate and approval of the Family and Medical Leave Act of 1993, it is unlikely that this type of workplace policy will be replicated in the United States.

The mentoring gap

Lack of good mentoring is more problematic for women than men, because women are less likely to be mentored than men and because the effects of mentoring on retention and performance are greater for women. Sex disparity in mentoring is greatest in academic institutions, where mentoring tends to be quite informal and thus arises naturally between male professors and male students. With more female professors, female students may find that developing a mentoring relationship is becoming easier. However, because the sex ratios of science professors continue to be highly unbalanced, formal mentoring programs for female science students, which have been growing in number during the past 10 years, should continue to be set up and supported in all academic institutions. Then women who are having trouble developing a personal relationship with a professor can be directed to professors or graduate students who are willing to take on the role of mentor. A variety of universities now use a program called multilevel mentoring, in which a junior biology major may mentor a freshman and also be mentored by a postdoc. Such a program creates a network of women to whom individuals can turn with questions. Social occasions for participants have also been successful in making the relationships more personal and developing ties with a whole community of women in science. These activities need not be limited to women, although, given the ease with which men seem to develop these relationships in academe, mentoring programs aimed at women may be sufficient.

In industry, men and women are equally likely to be mentored, and mentoring relationships generally develop in organizations in which mentoring is the cultural norm or where formal mentoring programs have been put in place. Again, for these institutions, mentoring programs are most likely to take hold when upper-level management puts its weight behind them.

The individual/field mismatch

Mismatches between an individual’s interests and the requirements of the scientific career are addressed in some of the policies advocated above. Good career counseling for degree recipients in the different scientific disciplines is likely to ward off bad matches that result from uninformed expectations. Mentoring relationships and well-developed networks of scientists with similar interests are likely to increase the personal connections that a given scientist makes with other scientists, thus reducing feelings of isolation. The trend toward interdisciplinary work during the past 20 years should give the individual scientist the opportunity to choose areas of work in which the science itself can be connected to a bigger picture. NSF and private foundations such as the Alfred P. Sloan Foundation have taken the lead in funding broad multidisciplinary research efforts. However, universities have historically had fairly rigid disciplinary boundaries. In order for scientists to feel free to participate in these interdisciplinary projects, the reward and promotion processes of employing institutions may have to be restructured to value this type of research.

In response to the finding that permanent exit is higher for men and women who are in fields that are changing at rapid rates, institution-sponsored skill update and training programs can help alleviate stresses associated with change. NSF sponsors programs for women who have left science to help them rebuild skills for reentry. These types of programs help ensure that temporary exit remains temporary. Training programs and skill updates are especially important in academic institutions, in which separation is not an option for tenured employees who feel that their skills have become out of date. Organizations such as the Mellon Foundation have been instrumental in supporting programs of career development for professors at all levels in liberal arts colleges. Many private companies do not engage in wide-scale training of existing employees in new techniques and knowledge. Companies may be comfortable with the loss of older employees who are not willing to update their skills because new employees, fresh out of the university, already have the updated skills and are cheaper than more senior employees. But if the pool of new hires becomes insufficient to replace this attrition, companies will have to face this issue head on.

Pursuing a science career should not be a matter of choosing hardship and sacrifice. In addition to interesting and challenging work, science careers should offer a strong support network, the possibility of having a real family life, an income throughout the career that allows a comfortable family lifestyle, and possibilities for continuous advancement and development. Currently, many scientists feel that science careers are falling short in one or more of these dimensions, both in absolute terms and in relation to alternative careers that are attracting bright and talented young men and women. The full scientific community in combination with government policymakers must mobilize for change in the scientific workplace. The future of the United States as a world power depends on their success.

Asian countries strengthen their research

The global scientific landscape is changing. During the past decade, many governments, convinced that their economic futures lay with knowledge-based economies, sought to strengthen national research and education. Increased foreign scientific competitiveness may be little noticed from within the U.S. research community, whose output still dwarfs that of any other country. Nevertheless, in aggregate these shifts are beginning to have an impact on U.S. research.

Although in many countries cultural and economic barriers still hamper scientific achievement, foreign science policy goals are clear. Thus, hurdles are likely to be overcome, and scientific progress is likely to accelerate. U.S. scientists will face intensified competition for the best students, corporate research support, space to publish in the top journals, and patents. Inevitably, this will reduce the perceived achievements of younger generations of U.S. scientists. Although they will work far harder than previous generations, they will not command the same dominating position in world science as did their predecessors.

The graphs below plot the rate of growth in key science and technology indicators in the United States and a group of science-minded Asian countries: Japan, Taiwan, South Korea, Singapore, China, and India. All the figures illustrate graphically a country’s rate of growth relative to a base year. Absolute values for the final year are included numerically below the graph. Although in most cases the absolute values are relatively small, the sometimes astonishing rates of growth deserve careful attention. Data are drawn from the National Science Foundation, CHI Research, and the Organization for Economic Cooperation and Development. A more complete treatment of the subject can be found at www.aaas.org/spp/rd/hicks404.pdf.
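For readers who wish to recreate indexed-growth charts of this kind from the published indicators, the usual convention is to scale each year’s value by the base-year value; the symbols below are illustrative, and this is a sketch of that convention rather than a description of the exact method used to produce the graphs here:

\[
\text{index}_t = 100 \times \frac{x_t}{x_{\text{base year}}},
\]

so a country whose R&D spending triples over the period plots at an index of 300 in the final year, regardless of the absolute size of its spending.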

Growth in Gross Domestic Expenditures on R&D, 1991-2001

R&D spending

U.S. R&D spending rose by about 50 percent between 1995 and 2001. But this rate was exceeded by Taiwan, South Korea, and especially Singapore and China, where spending more than doubled and tripled, respectively. Nevertheless, at $275 billion in 2001, U.S. spending dwarfs that of all other countries.

Growth in total researchers (full-time equivalents), 1991-2001

Total researchers

The number of researchers in Asia has grown strongly in recent years, though the total numbers are still small compared to the United States. One interesting research question: To what degree does the recent growth in Asia reflect an increase in home-grown researchers versus the repatriation of researchers from developed countries, including the United States?

Growth in doctoral degrees awarded, 1986-1999

Doctoral degrees awarded

The number of doctoral degrees in China has exploded more than 50-fold since 1986. Although Chinese science education is recognized for its rigor, at the Ph.D. level respect for authority and the spirit of conformity still handicap Chinese science, according to a recent Nature article. In spite of this, the dropoff in the number of non-U.S. citizens awarded degrees in the United States does seem related to the growth in degrees awarded in Asian countries.

Where do Asian students study for their Ph.D.?

The drop in the number of degrees earned by Asian students in the United States and the difficulty in convincing Americans to pursue Ph.D.s in the sciences pose a potential threat to the vitality of U.S. science.

Growth in number of papers in Science Citation Index, 1986-1999

Papers published

Strong growth in Asian output of scientific papers affects the United States, because space is limited in the top journals. If output elsewhere, including Europe, strengthens in quality and quantity faster than the journals grow, U.S. scientists will inevitably experience more rejections of their papers.

Growth in U.S. patents invented in Asia, 1986-2003

U.S. patents awarded to Asians

These data reveal the maturity of the U.S. system as compared with Asian countries that have experienced growth of up to 100 times in the number of patents issued to their inventors since 1986. Again, the United States dominates in the number of patents issued to its inventors, as it should in its own system. However, the assumption that U.S. scientists and engineers will own the key technologies of the future may need to be carefully examined.

Archives – Spring 2004

Photo: NAS Archives

Atoms for Peace Award

Niels Bohr, the Danish physicist who received the Nobel Prize for his work on the structure of the atom, received the first Atoms for Peace Award on October 24, 1957, at the National Academy of Sciences in Washington, D.C. Bohr is on the left, with President Eisenhower in the middle, and Massachusetts Institute of Technology president James R. Killian on the right.

Bohr, who worked on the development of the atomic bomb at Los Alamos, became active in the campaign to stop the spread of nuclear weapons and to promote international cooperation in the control of nuclear technology.

The award was created by the Ford Motor Company in memory of Henry Ford and his son Edsel to honor individuals who advanced Eisenhower’s goal of promoting the peaceful use of nuclear technology. Killian was chairman of the award’s board of trustees.

The award’s significance was reflected by the stature of the program’s participants. President Eisenhower spoke briefly, followed by Henry Ford II and physicists John Archibald Wheeler of Princeton University and Arthur Holly Compton of Washington University.

The New Knowledge Economy

The Gifts of Athena: Historical Origins of the Knowledge Economy, by Joel Mokyr. Princeton, N.J.: Princeton University Press, 2002, 376 pp.

The past decade has seen increasing realization that the U.S. economy is powered not by the old factors of production–land, labor, and capital–but by new factors: knowledge and innovation. This understanding is reflected in the resurgence of interest in economists, such as Joseph Schumpeter, who placed entrepreneurship and research at the center of growth, along with the development of the “new growth economics” that explicitly focuses on the role of knowledge in stimulating growth. Yet federal policymakers and their neoclassical economics advisers remain largely focused on old economy factors: notably, responding to the business cycle and raising or lowering personal income taxes.

It is in this context that The Gifts of Athena is important. In it, historian of technology Joel Mokyr updates his work during the past decade on the role of knowledge in economic growth. Mokyr is clear about his goal: to determine how “new knowledge helped create modern material culture and the prosperity it has brought about.” Not all knowledge interests him, however; he focuses specifically on “useful knowledge,” that is, “the equipment we use in our game against nature.” Mokyr differentiates between two types of useful knowledge: what he calls “propositional knowledge,” which focuses on how nature works, and “prescriptive knowledge,” which focuses on how to use techniques. The former is embodied not just in science but in all kinds of knowing about how the world works. An example would be the development of the laws of thermodynamics. The latter is embodied in technical manuals and other “cookbooks,” but also in the technologies themselves. An example would be the technical knowledge needed to build a working steam engine.

After describing these two kinds of knowledge, Mokyr sets out to explain why the British Industrial Revolution happened when it did. He argues that before this period, which he terms the “Industrial Enlightenment,” there had been inventions and innovations, but they had not coalesced to produce what Walt Rostow referred to in the 1960s as “take-off.” Mokyr argues that before 1800, much of the technological progress was in the area of prescriptive knowledge and led, in particular, to singleton techniques. He cites Jenner’s 1796 discovery of vaccination as an example, because it led to no further vaccines until the triumph of the germ theory 100 years later. This was because, Mokyr claims, there was no underlying propositional knowledge to guide further work. In his words, “Many societies in antiquity spent a great deal of time studying the movements of heavenly bodies, which did little to butter the turnips.”

It was the “widening of the epistemic bases after 1800” that signaled “a phase transition or regime change in the dynamics of useful knowledge.” In particular, the development of positive feedback between propositional knowledge and prescriptive knowledge led to powerful innovation effects. Indeed, Mokyr argues that the Industrial Revolution of the 19th century was built on the scientific revolution of the 17th century and the Enlightenment movement of the 18th century. It was not necessarily that scientific breakthroughs led to the Industrial Revolution, but rather that more easily transmitted and formalized knowledge, especially propositional knowledge, made innovation easier. New technical and scientific societies; new journals, handbooks, and encyclopedias; cheaper and more widespread postal services; and accepted standards of weights and measures all contributed to the spread of useful knowledge.

Moreover, the scientific revolution of the 17th century led to a focus on experimentation and rationality even in technical nonscientific fields. Mokyr makes an important and usually overlooked point about the Industrial Revolution when he argues that it was driven not so much by lone heroic inventors as by a social network for innovation. In this case, it was “at most a few thousand people who formed a creative community based on exchange of knowledge.” Indeed, he argues that a key to innovation and growth is the development of institutions that foster communication and trust between individuals who develop propositional knowledge and those who make things using it. In doing so, he takes the neoclassical economists to task for failing to deal very well with “the efficiency of the knowledge production function, that is, the ease with which the efforts are transformed into invention.”

Perhaps the book’s most important contribution is its discussion of the political economy of knowledge. Mokyr makes a compelling case that in contrast to the reigning paradigm of neoclassical economics, which tends to give short shrift to issues such as politics, values, and institutions, technological advance is far more influenced by these factors than is commonly realized. Citing a long list of cases, he details a litany of factors that can get in the way of technical progress, including religion, government bureaucracies, unions, companies threatened by the innovators, broad antitechnology movements, and consumer groups.

For example, Mokyr documents how an 18th-century guild in Prussia went so far as to issue an ordinance laying down that no artisan “shall conceive, invent, or use anything new.” He argues that resistance to technological change stems from two sources that aid and abet each other: the economic interest of the technological status quo, and the resistance of intellectuals who fear new technology. Some observers might claim that these forces were only a factor in medieval times or in less-developed societies, but the forces of reaction today are by no means weak. Such forces include the Bush administration’s ban on therapeutic cloning; environmentalists’ opposition to biotechnology; the widespread resistance to e-commerce by affected business interests such as car dealers, real estate agents, and optometrists; and resistance by privacy advocates to a whole host of new technologies. It is clear that resistance to progress is alive and well in the early 21st century.

For all of its strengths, though, the book suffers from several limitations. For a book of economic history, it is surprisingly dry, with few engaging descriptions of historical events, and there is a certain amount of repetition and long-windedness. As a result, the book is often hard slogging. It is also overly focused on England in the 18th and 19th centuries, perhaps because that is the ground Mokyr’s past work has covered. The industrial transformations in the United States during the late 1800s and mid-1900s are given relatively short shrift, although an analysis of them could do much to support his arguments.

In addition, Mokyr undervalues the extent to which there were radical innovations in the 20th century, arguing that the science and technology of that time produced mostly “microinventions.” Yet, the development of the integrated circuit, lasers, radar, antibiotics, mass-production assembly, biotechnology, and the Internet clearly were major innovations.

Mokyr also makes a number of claims that are dubious at best. For example, in citing the failure of the Tucker automobile in the late 1940s, he claims that venture capital is “a powerful tool for vested interests keeping out innovators.” On the contrary, one of the reasons why innovators in the United States have emerged so vigorously to challenge competitors, especially as compared with other nations, is precisely that they could turn to independent venture capital firms for financing.

Finally, although Mokyr argues convincingly that “useful knowledge” networks are a key to growth, this predominant focus on knowledge fails to explain why Britain was the site of the Industrial Revolution, while France, which arguably had a deeper science base, was not. He touches on the fact that knowledge was seen as being in the service of national interests in France, whereas in England it was more commercial. But this is an important factor that deserves much more discussion than the few pages it gets. In other words, although the broad-scale development of useful knowledge is key, without the right institutional systems and culture to enable this knowledge to be exploited in commercially useful ways, the results will be limited.

Notwithstanding these limitations, The Gifts of Athena is a valuable addition to the growing literature on “knowledge economics” and to the movement to transform economic policy so that it more explicitly supports knowledge-led growth. Although Mokyr does not suggest much in the way of policymaking, the book does lead to a number of policy considerations. Most important is the idea that to grow the economy, policymakers should concentrate on expanding knowledge (including research and technology) and on expanding the ways in which it is diffused.

Policymakers also need to recognize that a key form of capital in today’s “new economy” is not just financial capital, or even knowledge capital, but social capital. As Jane Fountain of Harvard’s Kennedy School of Government and I have argued in a Progressive Policy Institute report, Innovation, Social Capital, and the New Economy, there have been dramatic changes in the nature of innovation. Notably, innovation is increasingly produced in networks, and enhancing social capital is key to expanding such network-based innovation. If a new system of innovation is to fully emerge and prosper, then federal science and technology policies, which helped create the broad institutional contours of the postwar R&D system, must now be adapted to support the new institutional relationships among industry, universities, and government.

We propose several specific steps that Congress should take. First, we advocate expanding the research and experimentation tax credit to provide a flat 20 percent credit (as opposed to the current incremental credit) for any industry expenditures on research consortia and for research partnerships with universities or federal laboratories. Second, in order to draw university and industry researchers closer together on common challenges, we advocate establishing an Industry Research Alliances Challenge Grant Initiative to help support industry-led research alliances. Industry members would establish technology “roadmaps,” and on the basis of these maps, companies would invest in research conducted at universities or federal laboratories. Third, we propose that Congress establish a State Technology Innovation Challenge Grant Initiative to coinvest with states to support regionally based innovation partnerships between small- and medium-sized firms and universities or federal labs.

Supporting more research, especially collaborative research, is important. But it is equally important to ensure that policymakers are vigilant in fighting today’s neo-Luddites who, because of self-interest or ideological reasons, would place roadblocks in front of innovation. An array of technical areas (stem cells, agricultural biotechnology, new information technologies, and automation technologies, to name but a few) now face significant opposition from such forces. Mokyr makes it clear that the societies in the past that best resisted regressive forces were those that progressed the fastest. By providing a detailed view of how knowledge-based development has worked in the past, his book provides a valuable service.

Unconventional weapons

Ultimate Security: Combating Weapons of Mass Destruction, Janne E. Nolan, Bernard L. Finel, and Brian D. Finlay, eds. New York: Century Foundation Press, 2003, 312 pp.

Jonathan B. Tucker

Since the end of the Cold War, the focus of U.S. national security concerns has shifted from the former Soviet superpower to a diverse array of “rogue states” and terrorist organizations that are hostile to the United States and possess or actively seek nuclear, biological, or chemical (NBC) arms. This new emphasis has been accompanied by the growing ascendance of coercive approaches to combating the spread of unconventional weapons, including the interdiction of shipments on the high seas, ballistic missile defense, and military strikes.

Although the Clinton administration launched the Defense Counter-Proliferation Initiative to complement traditional nonproliferation tools such as treaties and diplomacy, coercive approaches to combating the spread of NBC weapons have reached an apotheosis under President George W. Bush. Shortly after taking office in January 2001, the Bush administration repudiated several arms control accords, including the Anti-Ballistic Missile Treaty, the Comprehensive Test Ban Treaty, and a draft inspection protocol for the Biological Weapons Convention. Then, in response to the terrorist attacks of September 11, 2001, the administration issued a new National Security Strategy in September 2002 stating that the United States “will, if necessary, act preemptively . . . to eliminate a specific threat,” such as a hostile regime that seeks unconventional arms and sponsors terrorism. This doctrine saw its first application six months later with the U.S. invasion of Iraq on March 19, 2003.

Ultimate Security, a collection of essays by several leading nonproliferation scholars and practitioners, alleges that the administration’s narrow focus on coercive strategies to combat the spread of NBC weapons is counterproductive and argues for the return to a more cooperative, regime-based, and internationalist approach. While reserving collective military action as a last resort, the authors call for drawing on the entire “tool box” of nonproliferation instruments, from economic sanctions and export controls to multilateral treaties and diplomacy.

The editors of Ultimate Security clearly did not aim to provide a comprehensive and politically balanced overview of the nonproliferation field. With the exception of a chapter by British scholar Joanna Spear on divergent U.S. and European approaches, the contributors are all American, and all but one embrace a center-left political philosophy. In addition, the focus is primarily on nuclear weapons. Rather than providing a tour d’horizon, the book offers a critique of current Republican policies and a roadmap for alternative strategies.

As with any edited volume, the individual essays vary in quality and originality. Although the standard here is generally high, two contributions fall short: A chapter by William Keller on economic globalization and the spread of conventional weapons technologies seems out of place in a book devoted to NBC threats, and a chapter by Jessica Stern on bioterrorism is too narrowly focused and fails to address the broader issues of chemical and biological disarmament.

Another flaw of the book is that it employs throughout the popular but misleading term “weapons of mass destruction” (WMD), which conflates into a single category three types of arms–nuclear, biological, and chemical–that have very different technical characteristics, physical effects, and degrees of lethality. For example, WMD can refer to a thermonuclear warhead capable of destroying an entire city or to a chemical artillery shell that could kill a few dozen people. Accordingly, a growing number of policy analysts and scholars in the nonproliferation field prefer the terms “unconventional weapons” or “NBC weapons.”

The first part of Ultimate Security includes a chapter by Amy Zegart of the University of California at Los Angeles on the organization of the various executive branch agencies involved in arms control and nonproliferation decisionmaking. According to her analysis, the interagency process is plagued by chronic turf battles and a lack of senior leadership, accounting for Washington’s persistent failure to craft coherent and effective policies. In the absence of a single coordinating body or “nonproliferation czar” with real power and budget authority, she argues, policymaking has inevitably been dominated by the most powerful agency–the Pentagon–reinforcing the ascendancy of military over diplomatic approaches.

Most of the subsequent chapters in the book examine the various measures in the nonproliferation tool kit. Joseph Cirincione of the Carnegie Endowment for International Peace assesses the current health of the 1968 Nuclear Non-Proliferation Treaty (NPT), under which the five declared nuclear powers–Britain, France, China, Russia, and the United States–pledged to reduce and ultimately eliminate their arsenals in exchange for a commitment by other states not to acquire them. Despite a relatively small number of “cheaters and abstainers,” such as North Korea, India, Pakistan, Iran, and Israel, he argues that the NPT has been successful at preventing the wider spread of nuclear weapons. Cirincione warns, however, that the forbearance of the non-nuclear states is directly linked to the willingness of the five declared nuclear powers to take concrete steps toward disarmament, as required under Article VI of the NPT. In his view, a discriminatory arrangement that “entitles” five states to possess nuclear arms while denying them to others is not sustainable as long as these weapons bestow real power and prestige, such as permanent membership on the United Nations (UN) Security Council. Indeed, one of India’s motivations for joining the nuclear club was to be recognized as a major player on the world stage.

Unfortunately, the five declared nuclear powers have shown little willingness to abandon their reliance on such weapons. Indeed, the Bush administration recently moved in the opposite direction by designating funds in the 2004 Pentagon budget to study a low-yield nuclear warhead capable of destroying deep underground bunkers. Cirincione contends that unless the currency of nuclear weapons in international affairs is devalued, additional countries will decide that their security is best served by building their own arsenals. Exacerbating these concerns is the failure of the administration to offer “a convincing strategic framework to replace the treaties it shuns.”

In a follow-on essay on military approaches to nonproliferation, Robert S. Litwak of the Woodrow Wilson International Center critiques the concept of rogue states whose leaders are irrational and undeterrable. Simply branding hostile proliferators as pariahs, he argues, has made it impossible to tailor effective policies toward the diverse countries subsumed under the rogue state category, such as North Korea, Iran, and Syria. Although the Clinton administration recognized this problem and decided to drop the term rogue state from the official lexicon, President Bush revived it. Litwak also examines a series of historical case studies to assess the value of military responses to proliferation and concludes that “force is no less problematic and uncertain in terms of yielding desired outcomes than its nonmilitary alternatives.”

As an example of a noncoercive nonproliferation strategy, Rose Gottemoeller of the Carnegie Endowment describes the Pentagon’s Cooperative Threat Reduction (CTR) program, which provides financial and technical assistance to Russia and other Soviet successor states to dismantle their Cold War stockpiles of nuclear or chemical weapons. CTR has been augmented by other U.S. government efforts to secure fissile materials and dangerous biological pathogens and to prevent the brain drain of expertise from the former Soviet weapons complexes. Gottemoeller argues that although these programs have been remarkably successful and should be extended to other countries, conservative opponents in Congress and the executive branch have undermined them.

Problematic account

The book’s most problematic contribution is by David Kay, who recently resigned as head of the Iraq Survey Group searching for evidence of that country’s pre-war NBC weapons programs. Kay’s chapter assesses the activities of UN inspectors in Iraq, which took place in two phases. The UN Special Commission (UNSCOM) operated for seven years after the Persian Gulf War, from June 9, 1991, until its withdrawal on December 16, 1998, in response to growing Iraqi noncooperation; its successor, the UN Monitoring, Inspection and Verification Commission (UNMOVIC), was on the ground in Iraq for only three and a half months, from November 27, 2002, until March 18, 2003, the day before the U.S. military invasion. Unfortunately, Kay’s chapter was written in mid-2001 and not updated, so it focuses exclusively on the UNSCOM experience without the benefit of more recent information.

Although Kay admits that UNSCOM scored a number of successes in the face of Iraqi duplicity and deception, he contends that it ultimately failed to eliminate Saddam Hussein’s prohibited weapons programs. He attributes this failure to several factors: UNSCOM gave too low a priority to operational security and was penetrated by Iraqi intelligence; it was too eager to declare Iraq free of prohibited weapons and allow a transition to a less intrusive and politically more comfortable monitoring program; and it depended on the political unity of the five permanent (P-5) members of the UN Security Council. When divergent interests and rivalries caused P-5 support for the inspections to erode, these countries also became less willing to share vital intelligence information with UNSCOM. In view of this record, Kay is pessimistic about the ability of the United Nations to pursue weapons inspections in the future. “Coercive disarmament, by its very nature,” he concludes, “is an unnatural act for an international organization.”

Because Kay’s account lacks historical perspective, he shortchanges the accomplishments of the UN weapons inspections. Given the failure of the Iraq Survey Group to find any trace of the prohibited weapons that supposedly had eluded UNMOVIC before the war, the work of the UN teams–which the Bush administration had scorned as incompetent–has belatedly garnered respect. Kay’s lack of familiarity with the history of the UNSCOM biological investigation also causes him to underestimate its effectiveness. He claims that UNSCOM failed to uncover the Iraqi biowarfare program and that it was exposed by a senior Iraqi official, Gen. Hussein Kamel, who defected to Jordan in August 1995. In fact, by carefully piecing together a mosaic of evidence from multiple sources, UN inspectors obtained compelling proof of the mass production of anthrax bacteria and botulinum toxin despite Baghdad’s determined concealment efforts. The Iraqis were therefore forced to admit the production program on July 1, 1995, more than a month before Kamel’s defection.

Ultimate Security concludes with a chapter by the editors critiquing the Bush administration’s policy of selectively lambasting rogue states for pursuing nuclear arms while tolerating the acquisition or retention of these weapons by friendly countries such as India, Pakistan, and Israel. In the administration’s view, the editors write, “some states legitimately possess nuclear weapons . . . because they are law-abiding and would contemplate their use only in self-defense (or as part of a collective security alliance). States seen as criminal, by contrast, forfeit the right to acquire or possess weapons of mass destruction, not so much because of international strictures but because of their belligerence toward the West.” The editors argue that this double standard arouses bitter resentment in many parts of the world and is ultimately detrimental to U.S. interests. Instead, they argue, proliferation should be viewed as an intrinsically international problem that is best addressed by multilateral treaties and institutions and a collectively shared set of norms.

Since Ultimate Security was published in late 2003, three dramatic developments have occurred: the failure to find NBC weapons in Iraq, calling into question the administration’s rationale for preemptive war; Libya’s decision to abandon its nuclear and chemical weapons programs, raising hopes that proliferation in the Middle East region might be reversed; and the revelation that Pakistani nuclear scientist A. Q. Khan directed a vast smuggling network in uranium-enrichment technology, heightening fears that nuclear weapons could fall into the hands of terrorists ruthless enough to use them. Exposure of the Khan network has increased the importance and urgency of international cooperation to halt nuclear trafficking and to strengthen the physical security, control, and accounting of fissile materials worldwide. These recent developments have made the basic arguments in Ultimate Security even more timely. If, as seems likely, the 2004 presidential campaign includes a serious debate on the future direction of U.S. nonproliferation efforts, the book will provide some useful grist for the Democratic policy mill.


Jonathan B. Tucker ([email protected]) is a senior researcher with the Center for Nonproliferation Studies at the Monterey Institute of International Studies’ branch office in Washington, D.C., and a visiting lecturer at the Woodrow Wilson School at Princeton University. He is the author of Scourge: The Once and Future Threat of Smallpox (Grove Press, 2002).

Talk to Me

No president has ever lacked for free advice. Everyone has some policy wisdom to share. But the Bush administration has been plagued with advice-related complaints. It began with receiving secret advice on energy policy from the energy industry, continued with not asking the scientific community for its advice on global warming, and went on to ignore the advice that it eventually asked for. The administration’s Office of Management and Budget proposed that it would seek more rigorous scientific guidance in the review of regulations, but it ran into trouble with its position on whose advice could be trusted. Now the Union of Concerned Scientists has issued a report about what it finds wrong with the way the administration is selecting members for its advisory committees, and a group of distinguished scientists has released a statement making essentially the same point.

Of course, many of the Bush critics are hardly acolytes of rigorous science. In the March 8, 2004, issue of The Nation, Robert F. Kennedy, Jr., writes a scathing indictment of the “flat-earthers” in the Bush administration and their assault on science. A good choirboy, he sings the praises of the labcoats: “Science, like theology, reveals transcendent truths about a changing world. At their best, scientists are moral individuals whose business is to seek the truth.” As the tears began to form in my eyes, I read on to discover that Kennedy has worked for the Natural Resources Defense Council, which loudly demonstrated its own abuse of science a few years ago in its over-hyped and under-reviewed report on the dangers of the pesticide Alar. Apparently, we do not want to go too far in trusting the judgment of nerds.

The distinguished scientific leaders who signed the statement criticizing the Bush administration are undoubtedly right that the administration has taken some regrettable actions. (Am I going to dismiss a group of Nobel laureates and former government officials that includes quite a few Issues contributors?) The problem is that the statement can be read as a squabble between Democratic and Republican scientists. David H. Guston, E. J. Woodhouse, and Daniel Sarewitz made a more fundamental point in “A Science and Technology Policy for the Bush Administration” (Issues, Spring 2001): “The real need is for better integration of science policy with other types of social policy, rather than for greater isolation of science policy.”

As long as scientific advice is treated as some form of transcendent truth that exists outside the give and take of political negotiation, there will be a premium on appointing committees that come to predictable conclusions. Once the scientific advice has been offered, the scientists can be sent home so that the political players can get down to the real work of crafting policy. If, instead, science is simply one of the voices at the table where policy is discussed, it will be more influential and less vulnerable to political grandstanding. Let’s take science off the pedestal. It needs less reverence and more power. Scientists should not aspire to be listened to. They need to be talked to and engaged in argument. The real problem with this and previous administrations is the failure to fully integrate science into the larger process of setting national policy across a broad spectrum of concerns from health to defense and education to the environment.

Forum – Spring 2004

Pornography on the Net

As Dick Thornburgh and Herbert Lin note in “Youth, Pornography, and the Internet” (Issues, Winter 2004), the Supreme Court took an important step in the legal fight to protect children from online pornography with its 2003 decision upholding the constitutionality of the Children’s Internet Protection Act (CIPA), a federal law that requires public libraries that rely on federal funds for Internet use to install filtering devices on library computers to protect children from the darker side of the Internet–pornography and obscenity.

Chief Justice William Rehnquist, writing for the majority, clearly understood the responsibility that the government has in protecting the interests of our children. “Especially because public libraries have traditionally excluded pornographic material from their collections, Congress could reasonably impose a parallel limitation on its Internet assistance program,” he wrote. In other words, if a child cannot walk up to a librarian and request a copy of Hustler magazine, taxpayer dollars should not be used to fund a library’s Internet program that enables a child to access Hustler-like material (or worse) online.

Now the Supreme Court is considering another critical law designed to protect young people from online porn. At issue is the Child Online Protection Act (COPA), passed by Congress in 1998 and the subject of legal challenges ever since its inception. Simply put, the measure would require operators of commercial Internet sites to use credit cards or some form of adults-only screening system to ensure that children cannot see material deemed harmful to them. If operators don’t comply, they could face fines and jail time.

We filed an amicus brief with the Supreme Court on behalf of 13 members of Congress–including one of the cosponsors of COPA–asking the court to overturn a federal appeals court decision declaring COPA unconstitutional. COPA does not pose a constitutional crisis but merely represents a careful and sound response by Congress in addressing this egregious problem.

The requirements imposed by COPA will not destroy the Internet and do not infringe on the rights guaranteed by the First Amendment. COPA simply places a reasonable restriction on access to commercially marketed indecency to prevent access to such materials by minors. In approving COPA, Congress sought to ward off a threat once recognized by Franklin D. Roosevelt: “the nation that destroys its soul destroys itself.”

The Supreme Court applied logic and common sense last year when it upheld the constitutionality of CIPA. Opinion-makers coast to coast understood the important ramifications of that decision. One editorial concluded that “protecting children from filth is the correct and conscientious thing to do. Filth is filth. It does not become any more acceptable, it does not gain any value, simply because it arrives electronically.”

It is our hope that the Supreme Court will now follow in its own footsteps and find COPA to be constitutional as well, delivering a powerful one-two punch in the legal fight to provide online protection for our children.

JAY SEKULOW

Chief Counsel

American Center for Law and Justice

Washington, D.C.

www.aclj.org


Environmental statistics

I commend H. Spencer Banzhaf’s analysis of the numerous benefits of establishing a Bureau of Environmental Statistics within the Environmental Protection Agency (EPA) (“Establishing a Bureau of Environmental Statistics,” Issues, Winter 2004). For the past 30 years, EPA has continued its role as a regulatory agency without the benefit of such a statistical bureau. Many federal agencies use statistical bureaus as a basis for policy decisions and to track performance results mandated by the Government Performance and Results Act. Without valid statistical data on the state of the environment, EPA cannot determine whether its regulatory efforts are producing environmental results. I agree with Banzhaf that the use of environmental statistics is long overdue. I will continue my efforts to pass H.R. 2138, elevate EPA to a department, and establish a Bureau of Environmental Statistics.

REPRESENTATIVE DOUG OSE

Republican of California

Chair, House Government Reform

Subcommittee on Energy Policy

www.reform.house.gov/EPNRRA


H. Spencer Banzhaf eloquently articulates the rationale for an expanded and well-focused national commitment to reporting on the condition of the environment. The Heinz Center’s 2002 State of the Nation’s Ecosystems report (http://cfinsights.issuelab.org/resources/11234/11234.pdf) is based on a similar conclusion: that the nation needs periodic, high-quality, nonpartisan reporting on the condition and use of our lands, waters, and living resources. Unfortunately, we found that it was not possible to provide national-level data for nearly half of a suite of carefully chosen indicators, and there were important gaps in what could be reported for nearly a quarter more. Banzhaf’s article highlights deficiencies in air and water pollution information; our work also identified important gaps in ecological information, such as the condition of the nation’s freshwater habitats; the amount of carbon stored in our forests, farms, and rangelands; the spread of non-native species; the size of the “dead zones” in our coastal waters; the amount of farmland affected by excess salinity; and the fraction of our nation’s aquifers declining because of overuse. Filling such gaps should be a key national priority.

Readers should be aware of two ongoing studies that will help policymakers understand the dimensions of the resources needed for improved reporting. The first, currently under way at the Heinz Center, will provide estimates of the costs of filling the data gaps identified in our 2002 report, along with information on which of these are seen by environmental practitioners as most important to fill first. The second, being undertaken by the General Accounting Office, will examine a wide range of environmental information sources and will ascertain whether these data sources appear to be stable or whether there are indications that scientific, technical, budgetary, or other mid- to long-term challenges may erode current reporting capabilities.

I also heartily endorse Banzhaf’s recognition that external input to such reporting efforts is vital. In any such effort, both the choice of what to report and the way in which the data are presented must be unassailably above suspicion of political or partisan influence. Experts from the business and environmental advocacy communities, academia, and government agencies worked together to produce, not simply advise on or endorse, the State of the Nation’s Ecosystems report. Broad involvement and transparency are crucial to any effort to institutionalize environmental and ecosystem reporting.

Finally, I would like to emphasize the importance–and not minimize the difficulty–of interagency and intergovernmental coordination in environmental reporting. Environmental and ecological data are collected by multiple federal agencies, by states and local governments, and by nongovernmental organizations. Ensuring that these entities “play well together” requires careful attention to the question of why any organization should modify its behavior to meet another organization’s goals. Improving the nation’s reporting capabilities will require careful attention to incentives, organizational imperatives, and (last but not least) money.

ROBIN O’MALLEY

Senior Fellow and Program Director

The H. John Heinz III Center for Science, Economics, and the Environment

Washington, D.C.

[email protected]


Climate change policy

Richard B. Stewart and Jonathan B. Wiener (“Practical Climate Change Policy,” Issues, Winter 2004) provide some interesting thoughts on moving beyond the Kyoto impasse. I agree with their recommendation that we need to take a comprehensive approach to limiting greenhouse gas emissions in order to enhance both environmental and cost effectiveness. I also share their view on the importance of finding ways to bring the United States on board. Stewart and Wiener suggest joint accession by the United States and China as a way forward. Their proposal does have the merit of enhancing the environmental effectiveness of the Kyoto Protocol and helping stabilize the price of permits on the international carbon market. It is certainly in the interest of the United States, because the participation of China would substantially reduce U.S. compliance costs and increase environmental effectiveness. But I doubt the prospects for this proposal for the following reasons.

First, although broad discussions and cooperation in the field of climate change continue between China and the United States, it is doubtful that China would be willing to discuss joint cap-and-trade arrangements. For historical reasons, China attributes great importance to maintaining unity of the Group of 77, and engaging in discussions on joint cap-and-trade arrangements with the United States may well be perceived as a threat to the solidarity of that group. Developing countries, including China, insist that industrialized countries should take the lead in reducing their greenhouse gas emissions before developing countries even consider taking on such commitments. With the U.S. withdrawal from the Kyoto Protocol and a very low scale of overall emissions reductions in the industrialized countries during the first commitment period, it is unclear whether developing countries would regard their wealthy counterparts as having taken the lead by the beginning of the second commitment period.

When it comes to negotiating developing-country commitments, it is in the interest of China to join with other developing countries and negotiate developing-country commitments under the climate convention. This will give China much more clout in the final collective bargaining to determine its emissions commitments. International climate negotiations in Bonn and Marrakech clearly demonstrate China’s devotion to the Kyoto Protocol. A comparison of China’s original positions and the final decisions in the Marrakech Accords clearly shows that China is willing to give on many issues in order to keep the Kyoto Protocol alive and that China continues to aspire to be recognized as a responsible member of the international community.

Second, the legitimacy of the U.S. insistence that it will rejoin the Kyoto Protocol or a follow-up regime only if major developing countries join as well is questionable. Given that the United States is the world’s largest economy and emitter of greenhouse gases, it has both the responsibility for the global climate problem and the ability to contribute to solving it. To have a significant long-term effect on global greenhouse gas emissions, a global climate regime eventually must include substantial participation by developing countries. But unless the United States has made sensible commitments itself, it does not have the moral right to persuade developing countries to take meaningful abatement actions.

Third, developing countries have been sensitive to commitment issues, and the U.S. position in the New Delhi climate conference makes the launching of a dialogue on broadening future commitments difficult. At Kyoto, the United States called for stronger action by developing countries, but in New Delhi declared such discussion about developing-country commitments premature. This would have long-term implications, because developing countries would defend their position using this argument in the future when being asked to take on commitments. This certainly complicates initiating discussions on joint accession by the United States and China.

Fourth, the U.S. withdrawal from the Kyoto Protocol does nothing but erode trust and reinforce the stalemate between the North and the South, and it is difficult to imagine that China and India would assume emissions targets before U.S. reentry into the Kyoto regime or a follow-up regime. Doing so would be perceived as rewarding the United States for disregarding the protocol.

Stewart and Wiener suggest that emissions targets and pathways be set in such a way as to minimize the sum of climate damage and abatement costs over time. This approach sounds very appealing theoretically. The problem, however, is that abating greenhouse gas emissions involves costs today, whereas the benefits are delayed until the far distant future. This, combined with great uncertainty over estimates of climate damage, would prevent the approach from being implemented in practice.

ZHONGXIANG ZHANG

Senior Economist

Research Program

East-West Center

Honolulu, Hawaii

[email protected]


Richard B. Stewart and Jonathan B. Wiener proceed from sound premises: Climate change is a problem urgently requiring action; an emissions cap-and-trade system offers the best chance for obtaining broad international participation; and greenhouse gas (GHG) emissions limits should be comprehensive, covering all sources and sinks. The authors are spot-on in proposing to attract large-emitting developing nations by offering an emissions premium: an emissions cap set with headroom for development. This idea, pioneered well before the 1997 Kyoto conference, has not much entered the international discussions. It should. Stewart and Wiener also rightly emphasize the importance of engaging the United States in near-term domestic emissions reductions efforts such as the Climate Stewardship Act proposed by Sens. McCain and Lieberman. Without domestic U.S. action, it is unlikely that large-emitting developing nations would adopt emissions caps even with a premium.

The authors justly criticize intensity targets and technology subsidies, neither of which has any significant potential to reduce emissions or entice serious participation. They properly eschew taxes. (Policymakers should resist the temptation to convert cap-and-trade into a tax, as the so-called “safety valve” would do by simply allowing emitters to bust the cap for a fee.)

Although they argue that maximizing net benefits provides a ready guidepost for target-setting and on that basis deem the Kyoto targets arbitrary, the picture changes when seen through the prism of the objective of the 1992 United Nations Framework Convention on Climate Change (UNFCCC). The treaty’s objective, recently reaffirmed by President George W. Bush, is stabilization of atmospheric concentrations of GHGs at a level, and in a time frame, that avoids dangerous interference with the climate system. In the run-up to the 1997 Kyoto conference, top U.S. scientists had warned President Clinton that more than one degree of warming over the next century [stabilization of carbon dioxide at levels above 450 parts per million (ppm)] could be dangerous to vulnerable ecosystems. European scientists had proposed a “safe corridor” for the climate, suggesting that to avoid danger, warming should be limited to two degrees and concentrations to 550 ppm. These analyses informed the negotiations, indicating to policymakers the importance of Kyoto-sized reductions in keeping stabilization options open.

If future targets are to be guided not only by the practical precepts put forward by Stewart and Wiener but also by the UNFCCC’s objective, optimization alone will not be an adequate tool for setting global targets. At a minimum, regional views differ. Regions that are economically dependent on coral reefs or other vulnerable ecosystems, or that face uncertain but potentially disastrous consequences from abrupt changes such as the possible shutdown of the Gulf Stream or disruption of water transport systems, are making the optimization calculus differently than others. Their scales are tipping in favor of bringing Kyoto and like measures into force as a first and precautionary measure. Although Stewart and Wiener’s recommendations mark an important step, they will need to be refracted through the objective of the UNFCCC if they are to provide not only a practical but an effective starting point for crafting future climate policy.

ANNIE PETSONK

International Counsel

Environmental Defense

Washington, D.C.

[email protected]


Viral trade

Laura Kahn’s “Viral Trade and Global Public Health” (Issues, Winter 2004) raises important concerns about the age of our current international health regulatory structure in light of emerging infections. She correctly points out the need to modernize the regulations and argues for more global standardization of the legal framework. She argues that this is necessary if we are going to successfully control infectious threats in an environment of rapid change.

Reducing the threat of emerging infectious diseases will require a significant effort on a global scale, and improvements in public health law and regulations are an important step. In fact, the World Health Organization is currently working with its member nations to update the international health regulations, which were last modified in 1981. When they are finally adopted, it is hoped that a more modern and effective regulatory structure will be in place. There is, however, a more pressing and important problem that must be addressed; otherwise, current efforts to update these regulatory authorities will be ineffective. We need a strong public health infrastructure globally, nationally, and locally. It is our first line of defense.

Public health infrastructure is generally defined as people who are properly trained and have the tools and resources necessary to improve the public’s health. The global infrastructure for public health, like that in the United States, has been challenged for decades. In many parts of the world, basic infrastructure is at best tenuous. In an environment where we are one plane ride away from an infectious disaster, investing in an adequate infrastructure is an essential step. Although ensuring the adequacy of the legal authority to perform this work is one important component of an effective system, the capacity to quickly identify an emerging or reemerging infectious threat, track it, and contain its spread is the best defense against global disaster. In today’s world, preventing the disease from becoming endemic must also be a goal. Effective and rapid communication and coordination like that which occurred during the severe acute respiratory syndrome (SARS) outbreak of 2003 provide an operational framework to ensure success.

Virulent organisms do not know the rules of trade, diplomacy, or law. Properly crafted regulations are a small part of the solution. An adequate public health infrastructure is the key.

GEORGES C. BENJAMIN

Executive Director

American Public Health Association

Washington, D.C.

[email protected]


We have recently experienced several newly emerging viral infections around the world. Severe acute respiratory syndrome (SARS) claimed some 800 lives out of about 8,000 cases worldwide in the 2002-2003 outbreak. It is likely that, at an early stage of the outbreak, people were infected with the SARS coronavirus, the causative agent of the disease, from exotic animals such as civet cats in Guangdong Province, China. The global outbreaks of SARS then occurred through human-to-human transmission.

In 1998-1999, there was an outbreak of mysterious encephalitis in Malaysia, claiming more than 100 lives. Nipah virus was identified as the causative agent. (Fortunately, no human-to-human transmission of Nipah virus has been reported.) We now know that the causative virus jumped from fruit bats, the true reservoirs, into a pig colony on a pig-breeding farm, resulting in amplification of the virus in pigs. Then people in close contact with the farm contracted Nipah virus from the infected pigs. A transmission of monkeypox virus to prairie dogs in the United States from giant Gambian rats that were imported from West Africa as pets, a transmission of Marburg and Ebola viruses to nonhuman primates from still-unknown reservoirs, and a transmission of H5N1 influenza virus from migratory ducks to chicken colonies on farms were fundamental causes of outbreaks of these newly emerging viruses in humans.

All of these outbreaks have originated in developing countries. Deforestation, the destruction of nature, poor sanitary environments in animal colonies on farms, and high population density are associated not only with human economic activities but also with cultural, traditional, and recreational lifestyles. Many people depend on wood as an energy source, and many hunt wild animals for food. Such activities are considered to be fundamental origins of the emerging viral infections, so these infections may be associated in part with problems of poverty.

In order to minimize the risk of the emergence of global viral infections in the future, there is no doubt that strict controls on animal trade and the establishment and enforcement of global health standards through incentives by the World Health Organization and World Trade Organization are needed. However, we also emphasize the importance of overcoming the so-called North-South problem, the economic disparity between developed and developing countries. Combating emerging viral infections will be as difficult as resolving the North-South problem itself.

MASAYUKI SAIJO

MASATO TASHIRO

National Institute of Infectious Diseases

Tokyo, Japan

[email protected]


Human spaceflight

John M. Logsdon’s “A Sustainable Rationale for Human Spaceflight” (Issues, Winter 2004) is an invitation to begin public discussions that can lead to a more successful space program that will be a source of the national pride he invokes.

The problem is that the declaration of President George W. Bush that Logsdon quotes in his first paragraph, “Our journey into space will go on,” has rallied few scientists, engineers, or laypeople. Many scientists still see no compelling reason for humans in space. (Even in the 1980s, the Nobel Laureate physicist Edward Purcell remarked to me that the energy needed to lift a person out of Earth’s gravitational well would feed him or her for a lifetime.)

For a substantial number of remaining space enthusiasts–and I know them from academia and science fiction conventions alike–the president’s plans are a disappointing unfunded mandate. The demise of the troubled Space Shuttle program threatens to curtail the life of what most consider NASA’s greatest hit, the Hubble Space Telescope. Meanwhile, rank-and-file space researchers have grown cynical. One laid-off engineer recently e-mailed me, after I published a commentary on Columbia in the Washington Post: “I can assure you that NASA and the federal government do not want applicants. It is downsizing and makes sure people of any skill are made to stay unemployed.” And 62 percent of respondents to an ABC poll opposed the president’s plan.

Even among space advocates, exploration is an imperative in search of a rationale. For some, it is a showcase of civilian scientific and technical prowess, of national Promethean prestige. For others, it expresses the destiny of all humanity to transcend its earthly limits and spread through the cosmos, transforming the surfaces of the Moon and planets into economically productive habitable zones. And for still others, it is a scientific frontier; they in turn are divided on whether the presence of human investigators in space is worth the risks. Finally, in popular culture, both robotic and human initiatives are another new extreme sport, a cosmic Monster Garage.

Without being informed enough to recommend policies, I propose that scientists, engineers, lay enthusiasts, and other knowledgeable people finally debate openly what allocation of research money promises to yield the greatest benefits for human knowledge and well-being. The space budget should be considered along with the exploration of the deep sea and other frontiers. Hazards should be addressed openly but in themselves should not keep people from space; we owe our safe late-industrial society to the reckless entrepreneurs of the 19th century. And it was Jerome Lederer, father of NASA’s safety program, who established the agency’s initially admirable record by forthrightly calling his office Risk Management.

Critics of NASA should also be more aware of its positive unintended consequences: the unjustly trivialized spinoffs, strongly documented in a 1972 report by the University of Denver, Mission-Oriented R&D and the Advancement of Technology. A scientifically vibrant space program will make unexpected discoveries more likely by helping gifted researchers from many disciplines and nations talk to each other. This has been the pattern from the development of radar (and much else) in the Massachusetts Institute of Technology’s chaotic Building 20 to the breakthroughs in nonlinear science at IBM.

A reformed NASA remains one of the most constructive ways to show the flag. The sustainability of our society needs (among other things) the scientific and technological innovation that it can help provide. National pride, like individual self-esteem, should thus be an outcome rather than a reason.

EDWARD TENNER

Senior Research Associate

Lemelson Center for the Study of Invention and Innovation

National Museum of American History

Washington, D.C.

Visiting Scholar

Department of the History and Sociology of Science

University of Pennsylvania

Philadelphia, Pennsylvania


Like the NASA worldview he represents, John M. Logsdon looks to the past in search of some rationale for further adventures in space. Just as NASA continues to invoke Christopher Columbus and President Bush invoked Lewis and Clark in his speech recommending Moon and Mars missions, so does Logsdon cite historical precedent, harkening back to the Cold War to recommend power and especially pride as reasons for sending humans to the Moon and Mars. For me, however, these missions evoke not pride but embarrassment, not power but dissipation. Wasting precious resources on stunts with no practical payoff smacks of a Roman circus, a public entertainment to amuse and distract.

We went to the Moon in the 1960s “for all mankind,” as the plaque we left there proclaimed. Presumably we would return to the Moon and fly on to Mars in the name of humanity as well. Then let the rest of humanity share the cost. The United States possesses about one-third of the world’s wealth. When the rest of the world is prepared to pay two-thirds of the hundreds of billions of dollars it would cost to send people to Mars, then the United States might argue that it was playing its proper role in a great human undertaking.

Until then, such an expedition by the United States looks more like potlatch than pride. Potlatch is a ritual practiced by some North American Indian tribes. The wealthiest and most powerful members of the tribes gather for occasional ceremonies in which they throw their most valued possessions into the fire. The winner is the one rich enough and secure enough to squander more of his possessions than any other. It is the Native American version of what economist Thorstein Veblen called conspicuous consumption. People affect tennis attire in their daily rounds to suggest that they are rich enough to play all day instead of working, and they wear designer clothes to suggest that they can afford to pay more for their raiment than it is worth. Sending humans to Mars with current and foreseeable technology is a way of saying that we have so much wealth we can afford to squander it on fabulously expensive adventures with no practical return on investment. Far from being proud of this public circus, I am chagrined by its self-absorption and waste.

ALEX ROLAND

Department of History

Duke University

Durham, North Carolina

[email protected]


GM food fight

Jerry Cayford’s “Breeding Sanity into the GM Food Debate” (Issues, Winter 2004) provides an insightful overview of the big-picture issues in the clash between critics and advocates of biotechnology.

Cayford is exactly right in concluding that, for many critics of biotech, the central overriding issue is ownership and control, not food safety. But Cayford places too much emphasis, I think, on the patenting of plants as the single underlying issue at the heart of biotech industry control. Although not all biotech critics share our view, ETC Group has long opposed other forms of plant intellectual property monopoly such as plant breeders’ rights (known as plant variety protection in the United States) that also erode farmers’ rights and biodiversity and concentrate corporate power over plant breeding.

But intellectual property is not the only mechanism being used by corporate gene giants to achieve market monopolies and long-term control over new technologies. Industry is eager to develop post-patent monopolies: what ETC Group refers to as “New Enclosures.” The development of “Terminator” technology–genetic seed sterilization–is a prime example. Patents offer a legal mechanism to prevent farmers from saving and replanting proprietary seed. Terminator seeds, if commercialized, would offer a biological mechanism to prevent farmer seed-saving. Terminator technology is a stronger monopoly tool than patents; unlike patents, there’s no expiration date, no exemption for researchers, no discussion of compulsory licensing, and no need for lawyers. Those who believe that multinational seed corporations have abandoned their quest to develop suicide seeds are mistaken.

Given the emergence of new nanoscale technologies, we would argue that efforts to reform and resist intellectual property monopolies must go beyond “no patents on life.” Today, the capacity of scientists to manipulate matter is taking a giant step down, from genes to atoms. Nanotechnology refers to the manipulation of matter at the scale of atoms and molecules, the building blocks of the entire natural world. Whereas biotechnology gave us the tools to break the species barrier (to transfer DNA to and from unrelated organisms), nanotechnology enables scientists to shatter the barrier between living and nonliving. Nanobiotechnology refers to the merging of the living and nonliving realms to make hybrid materials and organisms–to integrate biological building blocks with synthetic materials and devices.

Worldwide, public- and private-sector nanotech funding currently totals between $5 billion and $6 billion per year, and it’s difficult to identify any Fortune 500 company that isn’t investing in nanotech R&D. With the rise of nanoscale technology, will we see sweeping patent claims on chemical elements and the products and processes related to atomic-level manufacturing? Will it be possible to patent new elements? Will it be possible to modify an element and then patent the process and the product? Clearly, societal concerns about control and ownership of powerful new technologies must extend beyond the patenting of life.

Biotech and other emerging technologies cannot be understood outside of their social, economic, and political context. I agree with Cayford that, at the most fundamental level, biotech activists seek to defend diversity, democracy, and human rights.

HOPE SHAND

Research Director

ETC Group

Carrboro, North Carolina

www.etcgroup.org


Semiconductor challenges

Permit me to applaud Bill Spencer’s thoughtful “New Challenges for U.S. Semiconductor Industry” and to add three observations of my own.

First, I concur with Spencer that preserving a robust semiconductor industry is crucial to the U.S. economy. The continued strong industry presence of the United States, with its stable government, established industrial base, world-class universities, hard-working labor force, tradition of manufacturing know-how, and unique innovation capability, is indispensable to the future of the world semiconductor sector. We need every nation’s intellectual and productive capital in order to stay on the demanding development pathways defined by the International Technology Roadmap for Semiconductors, which seeks to maintain the historical trend of doubling chip power every two to three years.

Second, Spencer is insightful in observing that innovative partnerships between industry and regional governments are part of the solution for maintaining this country’s ability to compete. SEMATECH has been able to vastly extend its work in extreme ultraviolet lithography (EUVL), due to support from the State of New York in establishing our advanced EUV Program with Albany Nanotech and the University at Albany. More recently, the State of Texas opted to partner with SEMATECH and state universities by agreeing to help establish an Advanced Materials Research Center to investigate future electrical and interconnect materials for silicon chips. In collaborating with SEMATECH in these programs, both state governments recognize an important truth: that the semiconductor field is the precursor and enabler of tomorrow’s employment industries, including nanotechnology and biotechnology. These nascent industries are expected to generate trillions of dollars in revenues, fueling R&D budgets and manufacturing requirements that will reverberate throughout the economy.

Finally, Spencer’s article makes a cogent argument for appropriate federal funding in an industry that has shown enormous return on investment, both economically and in national defense. To preserve a star economic performer and retain U.S. access to the application-specific chips needed for advanced defense systems, it makes eminent sense for Washington to take a renewed financial interest in the domestic semiconductor industry.

The semiconductor industry is evolutionary, complex, and increasingly global, but the nation that originated the technology should retain a firm presence in its future.

MICHAEL R. POLCARI

President and Chief Executive Officer

International SEMATECH

Austin, Texas

[email protected]


Coral reef protection

In “America’s Coral Reefs: Awash with Problems” (Issues, Winter 2004), Tundi Agardy argues persuasively that government needs to be smarter and more inclusive in its attempts to reverse serious declines in America’s coral reefs. From the Atlantic’s Florida Keys, Puerto Rico, and the Virgin Islands to the Pacific’s Hawaii, Guam, American Samoa, and the Northern Mariana Islands, the health and productivity of coral reefs have declined under steadily increasing pressure from overfishing and from land-based runoff of disease- and nutrient-laden silts and sewage. In more recent times, the decline has been exacerbated by increasingly severe summer heatwaves that have bleached and killed corals, symptomatic of global climate change. This degradation of coral reefs has already cost us dearly in terms of the quality of the experience we can have as snorkelers and divers, and in terms of the loss of ecosystem services such as fisheries, shoreline protection, tourism attractiveness, and opportunities for biodiscovery.

Agardy is deeply concerned that the policy and management response is simplistic and formulaic: map, monitor, and protect 20 percent of reefs in marine protected areas. She warns of the futility of developing configurations of protected areas on the basis of expedience rather than a proper regional diagnosis and a prescription of what is needed to achieve specified goals. Diagnosis and prescription may well be guided by maps and monitoring–the sorts of things facilitated by the U.S. Coral Reef Task Force. But they are by no means a sufficient basis for prescribing solutions, and hence Agardy’s call for the much broader range of perspectives that could be obtained by stronger engagement with scientists from universities and international institutions. Hence also the call for more buy-in by the private sector and better partnering with environmental organizations to take advantage of their proven abilities in public outreach.

Agardy’s article captures some critical aspects of the natural history of coral reefs that are not widely appreciated. Acre by acre, coral reefs are not all that productive in terms of usable protein for human consumption. The much-exploited predatory reef fishes that bring good prices as fillets, and astronomical prices alive, feed at the end of long food chains. In terms of production per acre, human consumption of reef fishes is extremely inefficient–akin to harvesting wolves instead of sheep or cattle. And the effect of each lost pound of top predator is magnified many times into ecological imbalances lower in the food chain. The solution of harvesting the grazing fishes themselves has been shown many times over to be catastrophic. Like pastures reverting to weeds when deprived of sheep or cattle, prime coral areas depleted of grazing reef species can revert to seaweed-choked piles of rubble–no longer the complex, rigid, three-dimensional matrix that so delights the human eye and so profusely supports the complex life of the reef.

Agardy sidesteps somewhat the somber implications of global climate change for coral reefs, and the potential benefits for reefs of the mitigation of greenhouse gas emissions. But she hits the mark in stressing the urgent need to address the foundations of ecological resilience in coral reefs. Hard as it may be, society must learn to value the functional roles of fishes on coral reefs at least as highly as we do the coral reef fish on the plate. Surely we’re smart enough to have our fish and eat it too. Let’s go for it!

TERRY DONE

Conservation and Biology Group

Australian Institute of Marine Science

Cape Ferguson, Australia

[email protected]


Tundi Agardy writes eloquently of the continued degradation of American coral reefs. Despite the formation of the U.S. Coral Reef Task Force and the development of the National Action Plan, too little is being done. There are bright spots: Some of the reefs in the Florida Keys National Marine Sanctuary are now protected from the impacts of fishing, and efforts are being made to improve water quality impaired by decades of sewage and nonpoint source pollution. The spectacular and mostly pristine reefs of the Northwestern Hawaiian Islands (NWHI), which make up the majority of reefs under U.S. jurisdiction, were first protected by President Theodore Roosevelt in 1909. In 2000, with support from Native Hawaiian cultural practitioners, fishers, scientists, and environmentalists, these fragile ecosystems were afforded some further protections by a Clinton-era Executive Order. However, the lack of a sustained, consistent, conservation-oriented national policy on coral reefs poses a grave danger to these rich ocean nurseries and sources of food, livelihood, and delight.

Agardy suggests that the scientific community has not been sufficiently engaged in the work of raising public awareness of the plight of reefs, nor in the work of analyzing threats and devising effective solutions. Although greater engagement of scientists will enhance coral reef conservation, this shortcoming does not appear to be the major factor limiting an effective U.S. response to our coral reef crisis. In some cases, scientific consensus has been achieved and articulated. In other cases, the threats are obvious. When the abundance of exploited species is much higher in no-fishing areas than on commercial fishing grounds with similar habitat, as has been documented in dozens of scientific studies, it’s clear that fishing is depleting these populations. Similarly, when watersheds are poorly managed and reef waters are discolored with sediment and overenriched with nutrients, both the problems and the solutions are clear. Moreover, many actions that would benefit coral reefs the most would make sense even if they had no benefit for reefs. More sustainable fishing, farming, forestry, and land use practices have multiple economic, social, and environmental benefits in and of themselves.

When the problems and solutions are clear, yet no action is taken, the reason is usually a lack of political will. Consider the case of the NWHI. In 2003, more than 100 scientists met in Honolulu and identified threats to the NWHI. They also suggested management measures. But despite this scientific input and the urging of the NWHI Reserve Advisory Council, there are grave concerns about the adequacy of the political process to translate that input into robust protections.

Participation by scientists is vital. But the stewardship of national treasures comes down to political decisions. Yes, let’s encourage more scientific participation, but let’s also cultivate courage in our political leaders and keep up the pressure on the officials to whom we entrust the care of our coral reefs. This combined approach is the best path to the conservation-oriented national policy our coral reefs so vitally need.

ROD FUJITA

Marine Ecologist

Environmental Defense

Oakland, California

[email protected]


Tundi Agardy sets forth a now familiar litany about the decline of global coral reefs. In my opinion, the reasons for the decline are not, as she suggests, “mysterious” but reasonably well understood. I like to call them the Big Three: (1) overfishing, (2) land-based pollution, and (3) global climate change. It is also understood that these human disturbances act in synergy and that future conservation and sustainable use of coral reef resources will require comprehensive management attention. Although coral reefs are enormously biologically diverse and their responses to disturbances are complex, it is important to remember that the Big Three have damaged nearly all coastal ecosystems in the world. Coral reefs, with their high profile and importance to the public, can be the “poster child” for coastal management solutions that will have general applications.

Clearly, there has been incomplete understanding of the problems, failure to understand and apply the available science, lack of government coordination, and agency infighting. Nevertheless, there have been landmark coral reef management plans such as the 1997 management plan for the Florida Keys National Marine Sanctuary and the 2000 National Action Plan to Conserve Coral Reefs of the U.S. Coral Reef Task Force. These were inspired by the pioneering 1979 management plan for the Great Barrier Reef Marine Park in Australia. Yet in spite of this attention, coral reefs have continued to decline. The reason is that the plan itself, rather than its implementation, became the goal, and so the decline was never reversed.

What can be done? Agardy suggests that we should hold our course but do it better, particularly through better integration of science and new technologies. I think that a new approach is needed. We must take as the management unit the entire Exclusive Economic Zone (from the shoreline to 200 nautical miles) surrounding each of our coral reefs. There is ample science supporting large ocean area management. Within these units–call them coral reef ecoregions–we can apply geographic information technologies to organize in accessible formats the huge amount of data and information that we already have. This will allow stakeholders to visualize the extent of the problems and the potential solutions, particularly the need for zoning to separate conflicting human activities, including recreational and commercial fishing, population centers, tourism, ship corridors, and critical conservation areas, to name a few.

By using this familiar land-use planning approach, the costs of the inevitable loss of freedom of access to the ocean are shared by all the stakeholders rather than, for example, a few disenfranchised fishers. Unlike land-use planning, seascape or ocean-use planning can be implemented on a trial basis, with the plan adjusted over time as new information becomes available. For example, after more than 20 years of management and monitoring, Australia recently rezoned the Great Barrier Reef Marine Park with international stakeholder participation, including increasing the area of fully protected marine reserves from less than 5 percent to 33 percent of the park.

Our coral reefs, no less than our coastal oceans, need comprehensive planning within ocean ecoregions in order to sustain future use. We have been nattering on, wringing our hands about coral reefs for almost two decades while their decline has deepened. We know what we must do and we must act quickly. If we don’t, future generations will be denied the demonstration of the power and beauty of nature and the lifting of the spirit that coral reefs so uniquely provide.

JOHN C. OGDEN

Director

Florida Institute of Oceanography

St. Petersburg, Florida

[email protected]

From the Hill – Spring 2004

Defense, homeland security dominate Bush’s FY 2005 R&D budget

Less than two weeks after Congress finally completed its work on the fiscal year (FY) 2004 budget, President Bush on February 2 released his FY 2005 budget proposal. The president would increase total federal R&D spending to $132 billion–$5.5 billion or 4.3 percent more than FY 2004–but, in a repeat of the past couple of years, most of the increase would go for R&D spending in the Department of Defense (DOD) and the Department of Homeland Security (DHS) (see table).

Funding for all other R&D agencies would continue to stagnate or decline, with increases in some agencies offset by steep cuts in others. Even two favored nondefense R&D agencies would have to adjust to diminished expectations. The National Institutes of Health (NIH) budget, which was recently doubled over a five-year period, would rise only 2.6 percent. The National Science Foundation (NSF) budget would increase 3.6 percent, but that would leave the agency well short of the money promised when a law to double its budget over five years was enacted in 2002.

The federal investment in research (basic and applied) would edge up by just $22 million to $55.7 billion. Basic research would increase by 1 percent or $258 million to $26.8 billion. Excluding the NIH budget, however, basic research would actually decline 1.9 percent.

Defense R&D, which includes DOD, the Department of Energy’s (DOE’s) defense activities, and defense-related activities in DHS, would total $74.7 billion, or 57 percent of the total federal R&D portfolio. Just a few years ago, defense and nondefense spending were roughly equal. Federal homeland security R&D spending, which cuts across a dozen federal agencies, would total $4.2 billion in FY 2005, a 15.9 percent or $575 million increase.

Budget details

DOD’s R&D budget would grow to $69.9 billion, an increase of $4 billion or 6 percent. The big winner, again, would be the Missile Defense Agency with a 20 percent increase to $9.1 billion, in preparation for deployment of a missile defense system beginning in 2004. By contrast, once again the Pentagon proposes to cut basic and applied research funding. Basic research would fall 5.3 percent to $1.3 billion and applied research would decline 12.3 percent to $3.9 billion. DOD’s Science and Technology account would fall an even steeper 15.5 percent to $10.6 billion. The budget for the Defense Advanced Research Projects Agency, however, would increase to $3.1 billion, up 9.1 percent.

NIH R&D would rise to $27.9 billion in FY 2005, with most institutes receiving increases between 2.8 and 3.3 percent. There would be no clear favorites, unlike the past two years when biodefense research was heavily favored. The largest percentage increase, 10 percent, would go to the Office of the Director because of $60 million in new money for the NIH Roadmap for Biomedical Research, NIH Director Elias Zerhouni’s initiative to reinvigorate NIH’s clinical research, interdisciplinary research, and new research tools.

NIH again proposes to discontinue an extramural construction program in the National Center for Research Resources (NCRR), leaving NCRR the only NIH institute to see its budget decline (down 7.2 percent to $1.1 billion). The total number of research project grants would increase by 1.4 percent, and the number of new grants would rise slightly but only to the FY 2003 level after falling in 2004. The average grant size would rise 1.3 percent, well below the 3.5 percent expected inflation rate for biomedical research. The size of the average new grant would actually fall in 2005, and the proposal success rate would fall to 27 percent, down from 30 percent last year.

NSF R&D would increase to $4.2 billion. The Major Research Equipment and Facilities Construction account would enjoy a sizeable increase, from $155 million to $213 million because of three new starts. The small increases for the research directorates would squeeze NSF funding of competitively awarded research grants. The total number of NSF research grants would fall to 1,645 in FY 2005, and NSF expects to make awards to fewer than one in four applicants this year and next year, making the competition fiercer.

Despite the president’s recently announced plans to return humans to the moon and subsequently go to Mars, most of the National Aeronautics and Space Administration’s (NASA’s) FY 2005 budget would go to near-term projects. The NASA budget would increase by 5.6 percent to $16.2 billion, but nearly all of the increase would go to returning the non-R&D Space Shuttle to flight (up 9.5 percent to $4.3 billion) and resuming construction of the Space Station (up 24.3 percent to $1.9 billion). NASA R&D would increase by 3.9 percent to $11.3 billion, but basic and applied research funding would actually decline (down 3.3 percent). In Space Science (up 4.2 percent to $4.1 billion), there would be large increases for Mars Exploration (up 16.1 percent to $691 million) to build the next generation of robotic explorers as well as a new Lunar Exploration account of $70 million to begin preparations for the return to the moon. Other NASA research efforts would decline steeply, including investments in earth science, aeronautics research, and physical sciences research. In technology development for the moon and Mars missions, NASA would close out the Space Launch Initiative and its projects such as the Orbital Space Plane and begin work on developing a Crew Exploration Vehicle.

R&D funding in the Department of Energy (DOE) would increase 1.3 percent to $8.9 billion. The entire increase and more would go to the Radioactive Waste Management program for a tripling of R&D activities in support of the Yucca Mountain nuclear waste disposal site. The $275 million R&D investment (up from $69 million) would depend on congressional approval of a new source of dedicated revenues. On the nondefense side, funding for most other DOE R&D programs would fall. R&D funding for the Office of Science (OS) would decline by 0.4 percent to $3.2 billion, marked by flat funding for core R&D programs and the proposed elimination of FY 2004 R&D earmarks. The few increases within the OS portfolio would be focused on nanoscale science and hydrogen research. Within Fusion Energy Sciences, funding for the International Thermonuclear Experimental Reactor (ITER) project would ramp up, requiring offsetting cuts in domestic fusion research. Overall funding for energy R&D (excluding the Yucca Mountain project) would decline steeply in FY 2005. Proposed increases for hydrogen R&D ($228 million, up from $159 million) would be offset once again with steep cuts in other renewable energy research. Increases for coal R&D would be offset with steep cuts in other fossil fuels R&D, and increases in fuel cell technologies would be offset with cuts in energy conservation R&D. DOE’s defense R&D programs would increase 2.1 percent to $4.3 billion, including increases for advanced scientific computing (up 6.6 percent) and a quadrupling of research efforts on the controversial Robust Nuclear Earth Penetrator ($28 million, up from $7 million).

The DHS R&D portfolio would increase 15.5 percent to $1.2 billion. DHS research would more than double, from $171 million to $431 million, as DHS begins to focus on developing a long-term knowledge base instead of immediate technology needs. The majority of DHS R&D would continue to be in the new Directorate of Science and Technology, with an R&D budget of $987 million in FY 2005. In addition, DHS has $885 million in FY 2004 and $2.8 billion in FY 2005 in already appropriated funds for the non-R&D Project Bioshield, which is designed to develop biodefense countermeasures. The FY 2005 DHS R&D portfolio would also be heavily oriented toward biodefense. The proposed $407 million in spending would make it the largest program area in the portfolio.

R&D in the U.S. Department of Agriculture (USDA) would fall $76 million or 3.4 percent to $2.2 billion. USDA proposes to eliminate $220 million in FY 2004 R&D earmarks and to hold other funding flat overall. Research funding would take a big hit, however, in order to provide $178 million to complete animal research and diagnostic facilities at the National Centers for Animal Health in Ames, Iowa, which would be the heart of a USDA-wide food and biosafety initiative. There would also be $37 million in new intramural research on food and agriculture defense. Extramural research funding would fall sharply, mostly because of the proposed elimination of earmarks, but funding for the competitively awarded National Research Initiative grants would climb 10 percent to $180 million.

Once again, the administration proposes to eliminate the Advanced Technology Program at the Department of Commerce, which was funded at $171 million in FY 2004. The savings would allow for a 30 percent boost for intramural research at the National Institute of Standards and Technology (NIST) laboratories. The budget would also keep funding for the non-R&D Manufacturing Extension Partnership at NIST at $39 million, well below the $106 million level of last year and previous years. National Oceanic and Atmospheric Administration (NOAA) R&D would decline by 3.3 percent to $611 million. The Oceanic and Atmospheric Research (OAR) account in NOAA would fall 11 percent in FY 2005 because of proposed elimination of earmarks and reductions in core OAR programs.

R&D in the Department of the Interior would fall 4 percent to $648 million, with a similar cut to $525 million in Interior’s lead science agency, the U.S. Geological Survey (USGS). There would be cuts to nearly every USGS program.

The Environmental Protection Agency (EPA) R&D budget would fall 7.1 percent or $44 million to $572 million. Although some of the cuts would be due to the proposed elimination of R&D earmarks, funding for many R&D programs would also decline. The extramural Science to Achieve Results (STAR) program would see its budget fall to $65 million, a steep drop from the $100-million annual funding levels of the past several years. EPA would eliminate STAR grants entirely in four research areas. EPA’s overall budget would fall 6.9 percent to $7.8 billion, with particularly steep cuts to state and tribal assistance grants and the Science and Technology program.

FY 2004 recap

Work on the FY 2004 budget was finally completed on January 23 when President Bush signed an omnibus appropriations bill that provided funding for eight of the 12 largest R&D funding agencies. (The omnibus bill also included a 0.59 percent across-the-board cut for all non-DOD appropriations, even for agencies whose budgets had already been signed into law.) Overall for FY 2004, the federal investment in R&D will increase to $127 billion, up $8.4 billion or 7.1 percent, and $4.6 billion more than the Bush administration requested.

DOD, DHS, and NIH will receive 93 percent of the R&D funding increase. DOD alone will receive 80 percent of the increase, with its budget reaching $66 billion, a boost of 11.2 percent or $6.7 billion. The new DHS R&D budget was set at $1.1 billion, a 43 percent or $316-million increase. After five years of annual 15 percent increases, NIH will receive just a 3.1 percent or $822-million increase to $27.2 billion.

Although the FY 2004 budget includes record increases for defense and international discretionary spending related to U.S. activities in Iraq and Afghanistan, domestic R&D spending increases were severely restrained. Increases in some R&D funding agencies were offset by flat funding or cuts in others.

After declining in FY 2003, earmarks made a dramatic return, rising 32 percent to $1.9 billion. Nearly all of the increase stems from a near doubling, to $825 million, of earmarks in the DOD budget. Earmarks in DOE R&D also doubled, to $284 million. USDA will receive $220 million in earmarked money, and NASA will receive $194 million.

The pressure on individual members of Congress to push for earmarks will be intense during the FY 2005 budget process, particularly because the president has proposed holding domestic discretionary spending to just a 0.5 percent increase while continuing to invest heavily in defense and homeland security programs. Thus, the budgets of many domestic agencies would be cut. In such an environment, providing additional funds for nondefense R&D projects would involve taking money from other areas that are already proposed for cuts. For these reasons, House Budget Committee Chairman Jim Nussle (R-Iowa) has threatened to push for legislative language in all appropriations bills calling for a one-year moratorium on all earmarks.



“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Deterring Nuclear Terrorism

Contrary to popular belief, with a little technological innovation, deterrence can become a useful strategy against terrorist use of nuclear weapons.

Has terrorism made deterrence obsolete? President Bush articulated the prevailing view in his June 2002 West Point address: “Deterrence–the promise of massive retaliation against nations–means nothing against shadowy terrorist networks with no nation or citizens to defend. Containment is not possible when unbalanced dictators with weapons of mass destruction can deliver those weapons on missiles or secretly provide them to terrorist allies.” Debate over missile defense aside, U.S. foreign policy thinkers have largely accepted his reasoning, though they argue on the margins over how unbalanced most dictators are.

Yet in confronting the prospect of nuclear terrorism–and there is no more dire threat facing America today–this logic is flawed. Its purported truth in addressing nuclear terror relies almost entirely on its assumption that rogue states could provide nuclear weapons “secretly” to terrorists. But were such now-secret links to be exposed, deterrence could largely be restored. The United States would threaten unacceptable retaliation were a state to provide the seeds of a terrorist nuclear attack; unable to use terrorists for clandestine delivery, rogue states would be returned to the grim reality of massive retaliation.

Most policymakers have assumed that exposing such links would be impossible. It is not. Building on scientific techniques developed during the Cold War, the United States stands a good chance of developing the tools needed to attribute terrorist nuclear attacks to their state sponsors. If it can put those tools in place and let its enemies know of their existence, deterrence could become one of the most valuable tools in the war on terror.

Terrorists cannot build nuclear weapons without first acquiring fissile materials–plutonium or highly enriched uranium–from a state source. They might steal materials from poorly secured stockpiles in the former Soviet Union, but with the right investment in cooperative threat reduction, that possibility can be precluded. Alternatively, they could acquire fissile materials from a sympathetic, or desperate, state source. North Korea presented this threat most acutely when it threatened in May 2003 to sell plutonium to the highest bidder.

The Bush administration appears to be acutely aware of such a possibility and is trying to prevent it by fighting state-based nuclear proliferation and by attempting to eliminate terrorist groups. Yet it has taken few effective steps to break direct connections between terrorists and nuclear rogues. The elimination of terrorist networks and prevention of nuclear proliferation should be top goals, but a robust policy cannot be predicated on assuming universal success in those two endeavors.

Two basic lines of attack might help break any connection. In the one currently favored by the administration, militaries attempt to break the terrorist/state link physically by focusing on interdiction of nuclear weapons transfers. But the technical barriers to such a strategy’s success are high. A grapefruit-sized ball of plutonium or a cantaloupe’s worth of highly enriched uranium is enough for a crude nuclear weapon that would flatten much of a city, and detecting such a shipment would be extremely difficult. Like missile defense, interdiction is a useful tool in preventing nuclear attack, but also like missile defense, it is far from sufficient in itself. In confronting the threat of missile attack, the United States ultimately relies on deterrence, threatening any would-be attacker with unacceptable punishment. It will need the same tool to prevent nuclear terrorism.

This, of course, raises a question: If nuclear materials are so hard to detect, how can state/terrorist connections be exposed? Solving this problem requires a novel and somewhat unsettling twist. Instead of simply focusing on intercepting bombs, we must learn to identify a nuclear weapon’s origin after it has exploded, by examining its residue. If the United States can take that technical step, it can credibly assure its enemies that their transfer of weapons to terrorists will ultimately lead to their demise.

At first glance, such a strategy might appear foolish: It would provide little comfort to identify an attack’s perpetrator after a U.S. city has already been destroyed. Adopting this criticism, though, would miss the essence of deterrence. During the Cold War, U.S. deterrence was based firmly in its ability to retaliate after a devastating Soviet attack. This by no means suggested that such an attack was acceptable or that retaliation would provide comfort. Instead, what was important was the threat’s ability to discourage any attack from occurring in the first place. Similarly, deterring nuclear terror by threatening its would-be sponsors would be aimed at using retribution not as an end but as a means to prevent attacks.

Finding the source

Finding a successful deterrence strategy requires that we make retaliatory action as certain as possible; there must be little room for the adversary to gamble that it might transfer nuclear weapons without suffering. Ideally, the United States would identify nuclear transfers when they occurred and punish the participants accordingly. However, the difficulty of intercepting nuclear transfers might embolden enemies to attempt to evade such a system. Moreover, enemies might believe that even if a transfer were detected, the United States would lack the resolve to punish them. Pyongyang, for example, with more than 10,000 artillery pieces poised for counterattack against Seoul, might conclude that the United States would not follow through on its retaliatory threats were it to intercept a North Korean bomb that had not yet been detonated.

Focusing on actual attacks rather than on transfers would solve both of these problems. Few doubt the U.S. resolve to retaliate were a nuclear bomb to be detonated in a U.S. city. And unlike shadowy transfers of nuclear material, a nuclear attack would surely be noticed.

The missing link, which scientists must provide, is the ability to attribute a nuclear weapon to its state source after an attack. On its face, this might appear impossible–during a nuclear detonation, the weapon’s fissile core of plutonium or uranium would be vaporized and transmuted, flung outward with the force of 20,000 tons of TNT. And yet, surprisingly, such a cataclysmic event would still leave behind traces from which the original bomb’s characteristics might be reconstructed.

Already, scientists at the nation’s three principal nuclear weapons laboratories are working on the problem. They have decades of experience to build on. Before 1963, when the world ceased testing nuclear weapons in the atmosphere, the United States developed techniques to infer details of Soviet bombs by examining their fallout, which it could detect from far away. By positing a range of possible bomb designs, technicians could infer details about the fissile materials–plutonium or uranium–used in the Soviet bombs, along with some of the weapons’ design details. (Presumably, the Soviets did the same to spy on the United States; thus, the two countries might cooperate to further develop attribution abilities.) Some of that expertise is still maintained, particularly in conjunction with the Nuclear Emergency Search Teams, whose task is to respond to nuclear terrorist incidents. Building on that foundation will require training a new generation of scientists in forensic techniques that were abandoned long ago. It will also require an effort by laboratory scientists to imagine weapon designs that terrorists or rogues might use. (Such designs could be simulated using the Department of Energy’s Advanced Supercomputing Initiative and would not require nuclear testing to validate.) It would be wise to pursue much of this in a limited multilateral environment, thus helping reassure the world that our attributions are sound and unbiased.

By itself, however, the ability to infer a bomb’s composition will not be enough. To successfully attribute an attack, there must be a state fingerprint to match it to. Knowing any characteristics of enemy weapons will be useful, but it will be particularly helpful to know the finer details of others’ plutonium and uranium. Those two elements come in various isotopes, and a given sample of either metal will combine several of those isotopes in hard-to-alter combinations. To some degree, one can infer those characteristics from the design details of the enemy’s production facilities and from the operating histories of its plants. In other cases, special access–such as that obtained in North Korea in the 1990s–can make it possible to measure the composition of a country’s uranium or plutonium. If the isotopic details of a weapon are known, attributing it will be much easier.
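To make the matching step concrete, here is a minimal sketch, in Python, of how a composition reconstructed from debris might be compared against a registry of known material signatures. Everything in it is hypothetical: the producer names, the isotope fractions, and the simple nearest-neighbor scoring stand in for the far richer data and statistical analysis that real forensic attribution would require.

```python
# Illustrative sketch only: matching a reconstructed isotopic composition
# against a registry of declared or inferred material signatures.
# All producer names and isotope fractions are invented for illustration.
import math

REGISTRY = {
    "producer_A": {"Pu239": 0.93, "Pu240": 0.06, "Pu241": 0.01},
    "producer_B": {"Pu239": 0.84, "Pu240": 0.13, "Pu241": 0.03},
}

def distance(sample, reference):
    """Euclidean distance between two isotope-fraction vectors."""
    isotopes = set(sample) | set(reference)
    return math.sqrt(sum((sample.get(i, 0.0) - reference.get(i, 0.0)) ** 2
                         for i in isotopes))

def closest_match(sample):
    """Return the registry entry whose signature lies nearest to the sample."""
    return min(REGISTRY, key=lambda name: distance(sample, REGISTRY[name]))

# A hypothetical composition reconstructed from post-detonation debris.
measured = {"Pu239": 0.92, "Pu240": 0.07, "Pu241": 0.01}
print(closest_match(measured))  # -> producer_A
```

The point is only that attribution ultimately reduces to comparing a reconstructed fingerprint against reference data; the better and more complete the reference data, the more confident the match.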

It may be possible to go further by exploiting states’ interest in not being wrongly identified as having originated a nuclear attack. In conjunction with strengthened International Atomic Energy Agency safeguards, states could be required to submit detailed isotopic data on the nuclear materials they produce and to submit to the data’s verification. If such states had pure intentions, this would help exclude them from blame were a future terrorist attack to occur; were their motives more suspect, this would provide the world a hedge against their future breakout. So far, states have been loath to take such actions, as they could require compromising sensitive military and commercial data. But the tradeoffs in confronting terrorism–in particular, in the immediate aftermath of an attack–might prompt many to reconsider.

Ambiguous intent

The physical identification of bombs with their builders still leaves open the question of intent. Imagine that a bomb made of North Korean plutonium were detonated in Washington: Would it not be essential, some ask, that we know whether the plutonium had been provided to terrorists intentionally, rather than stolen against the regime’s wishes? In fact, it should not matter. Instead, in deciding whether it would be appropriate to retaliate for an attack, we must ask two questions: Is it morally acceptable to retaliate? And is it strategically wise?

Insofar as deterrence itself is morally acceptable (a controversial proposition in some circles, but one at least tacitly accepted in the strategies of all eight nuclear powers), the threat and act of retaliation against an enemy for leaking nuclear materials, whether intentional or otherwise, are moral too. With possession of nuclear weapons comes the responsibility for their control. If a state is unwilling to accept responsibility for the impact of any weapons it builds, it can choose not to build them. If it builds them anyway, it should be understood to accept responsibility for any impact the weapons have. To see that such a proposition is widely accepted, one need look no further than the Cold War, when deterrent threats made little or no distinction between intentional and accidental launches of Soviet or U.S. missiles.

The strategic wisdom of retaliation under ambiguous circumstances is another matter entirely. Against an attack originating from North Korea or Iran, whether intentional or not, there would be little for the United States to lose were it to retaliate. Since the result of the retaliation would likely be regime change, it would be effective in removing the nuclear threat. Ideally, that prospect would induce both regimes not only to refrain from exporting nuclear materials but also to secure their stockpiles.

In contrast, if an attack were to originate from loose Russian material, military retaliation would be unwise. It is currently inconceivable that such an attack would be intentional on Russia’s part, as Russia is not an enemy; moreover, retaliation would do little to prevent further leakage of Russian material and indeed might provoke Russian retaliation in kind. The precedent for such an approach is also found in the changed U.S. attitude toward accidental missile launch since the Cold War. Does anyone believe that it would be strategically wise for the United States to retaliate militarily against an (improbable) accidental launch of a Russian missile?

Perhaps the toughest case is Pakistan, currently an ally in the war on terrorism. Few U.S. policymakers are confident that Pakistan’s nuclear arsenal is entirely secure, making weapons theft by terrorists a distinct possibility. At the same time, many doubt the sincerity of Pakistan’s cooperation with the United States, and given its past sales of nuclear equipment to North Korea, Iran, and Libya, there would likely be doubts as to whether nuclear material leaked from Pakistan was proliferated intentionally or was stolen. U.S. policy toward Pakistan on this question will likely depend on how the broader U.S.-Pakistani relationship evolves. President Bush’s national security team needs to debate now how it would respond to a leak of Pakistani nuclear material. If it concludes that it will hold the Pakistani regime responsible for any nuclear leaks, it should communicate its decisions clearly, though quietly, to the Pakistani leadership. At the same time, it should offer to help Pakistan secure its arsenal against theft.

Last year, a National Research Council panel, in addressing the threat of nuclear terrorism, reported that, “The technology for developing the needed attribution capability exists but has to be assembled.” It noted that an effort to complete that work is under way in the Pentagon’s Defense Threat Reduction Agency, but that it is not expected to be complete for several years. If attribution is construed merely as something useful after an attack, perhaps to provide evidence in prosecuting the offenders, it makes sense for it to take a back seat to urgent efforts such as securing ports and improving surveillance. Attribution, however, has the potential to be far more powerful. Coupled with the right threats, it can prevent terrorist attacks in the first place. The scientific effort must be accelerated, and declaratory policy must be modified to match.


Michael Levi ([email protected]), a physicist, is the science and technology policy fellow in foreign policy studies at the Brookings Institution, Washington, D.C.

Stronger Measures Needed to Prevent Proliferation

An updated Atoms for Peace program is needed to help solve problems of national and international security brought about by increased civilian use of nuclear energy.

Fifty years ago, U.S. President Dwight Eisenhower unveiled the Atoms for Peace program. In a widely noted speech to the United Nations (UN), he called on the United States and other nations to “make joint contributions from stockpiles of normal uranium and fissionable materials to an international atomic energy agency” that likely would operate under the aegis of the UN. This agency would be responsible for securing and protecting the accumulated materials. But more important, the materials “would be allocated to serve the peaceful pursuits of mankind. Experts would be mobilized to apply atomic energy to the needs of agriculture, medicine, and other peaceful activities. A special purpose would be to provide abundant electrical energy in the power-starved areas of the world. Thus contributing powers would be dedicating some of their strength to serve the needs rather than the fears of mankind.” The United States, Eisenhower declared, “would be more than willing–it would be proud–to take up with others ‘principally involved’ the development of plans whereby such peaceful use of atomic energy would be expedited.”

Much of this vision has been realized; and, most people would say, to the benefit of humankind. The Atoms for Peace program helped foster the early use of civilian nuclear technologies. The program also kept the United States in the center of efforts aimed at safeguarding such technologies. Today, most of the civilian nuclear material in the world requires U.S. permission to be exported. There has been no known diversion of nuclear material from safeguarded nuclear power plants. The International Atomic Energy Agency (IAEA) continues to be an essential tool for monitoring civilian activities. The Iraq and North Korean experiences also show the IAEA to be effective at detecting nuclear weapons-related activities (or, in the recent case of Iraq, the apparent lack of such activities), especially if the agency is backed up by aerial or space-based intelligence activities telling it where to look. These developments were not foregone conclusions at the time of Eisenhower’s speech in 1953 and, to that extent, much of the original vision has come to fruition.

Nevertheless, much of the vision remains to be realized, and what remains to be done leads to the heart of today’s problems in the nuclear world. Early on, Atoms for Peace activities unwittingly assisted nuclear weapons programs in India and Israel (although these programs would have succeeded eventually in any case). More recently, Pakistan, North Korea, Iran, Libya, and perhaps other countries have formed a clandestine network for the exchange of nuclear and missile technologies.

There now exist worldwide thousands of nuclear weapons’ worth of nuclear material in the form of separated plutonium and highly enriched uranium (HEU) that can easily be used in the construction of other nuclear weapons. Much of this material does not come from civilian programs and is not under international safeguards. Indeed, it is not under very good security of any kind. Some governments have connections with terrorists or are unable to prevent terrorist activities. To make matters more urgent, given the predicted two- to fourfold expansion of electrical power production over the next 50 years, nuclear power may well continue to expand in less developed countries, whether the United States approves or not. Providing security under these circumstances is at the heart of what an updated Atoms for Peace program must do this century.

The director general of the IAEA, Mohamed ElBaradei, recently noted that the “margin of security” under the world’s current nonproliferation regime “is becoming too slim for comfort.” He could equally well have said the same of the margin of security against nuclear terrorism. The thinness of the barriers to prevent terrorists from acquiring or making a nuclear weapon has frightened nearly everyone who has looked at them. The main obstacle to improving the margin of security is not a lack of ideas or proposed programs. Rather, the obstacle is the insufficient priority given by the governments that are “principally involved,” to use Eisenhower’s phrase, to implementing a worldwide program with the thrust and durability of the original Atoms for Peace program. Such a program would of necessity cover dangerous materials whatever their provenance–civilian or military, including dual-use facilities–and would provide a sustainable means to deal with governments that, wittingly or not, may aid terrorists.

Plans for improvement

The components of an updated security-conscious Atoms for Peace program fall into three general categories: materials control and facilities monitoring, effective international governance, and reduction of the demand for nuclear weapons. In some areas, there already may be sufficient agreement for governments to act jointly and give the programs the priority they need. In other areas, there is not. In still others, given the international situation, an effective solution remains elusive and only partial steps can now be attempted.

In the first category, seven measures would greatly increase effective control of the most dangerous nuclear materials: separated plutonium and HEU. Most of these measures have been widely discussed in the past decades. The first four measures are relatively specific, and some of them are under way:

Fulfill the pledges of the Group of Eight. At the group’s 2002 summit, seven of its member nations pledged to provide Russia (the remaining member) with $20 billion over 10 years to help prevent terrorists from obtaining weapons of mass destruction. The United States pledged to provide $10 billion of the total. Surplus nuclear weapons materials (several hundred tons of plutonium and HEU, mainly but not solely in the former Soviet Union and the United States) are probably the most urgent problem. This program has moved very slowly. “Only a tiny fraction has been spent or even allocated,” according to a recent report from the Nuclear Threat Initiative, a private group that monitors various global threats. No general mechanism has yet been developed for either the distribution or receipt of the money pledged. Individual countries now work out their own bilateral programs. The main program, that of the United States, currently is mired in a dispute regarding the extent to which U.S. corporations and scientists will be shielded from liability in the case of accidents occurring as a result of the program in Russia. According to Sam Nunn, a former U.S. senator from Georgia and one of the program’s earliest leaders, at the present rate of progress it will be 20 years before these materials are adequately secured.

Phase out the use of HEU in research reactors. Worldwide, 650 research reactors are known to have been built. Of these, 283 remain operational in 58 countries (with 85 operating in 40 developing countries), 258 are shut down, and 109 have been decommissioned. Approximately 135 of the operating reactors (in 40 countries) use HEU, which is defined as uranium that has been enriched so that at least 20 percent of its composition is U235, the isotopic form of uranium that can be induced to fission and hence is suited not only for use in nuclear reactors but as a material in nuclear weapons. Of the reactors that use HEU, about 60 either obtained their uranium from the United States or had their fuel enriched in the United States. For a number of years, the United States has been conducting a program to convert the U.S.-supplied reactors to low-enriched uranium (LEU), which cannot be used to make weapons. Through this Reduced Enrichment for Research and Test Reactors (RERTR) program, about half of the reactors have been so converted, with the United States generally taking back the HEU. To speed up progress, the RERTR program needs higher priority. In addition, other countries, such as Russia and France, have not been so diligent to date, and some countries where the reactors are located have not been as cooperative as others.

Implement a protocol to improve the physical security of weapons-usable material. There are no IAEA safeguards standards in force for the physical protection of nuclear materials. George Bunn, former general counsel of the U.S. Arms Control and Disarmament Agency and now a consulting professor at Stanford University, notes that “IAEA safeguards deter the country where the material is located from diverting it because the diversion will be discovered by accounting and inspection, but they are only marginally relevant to thieves or terrorists. The relevant treaty, the Convention on Physical Protection of Nuclear Material, applies to such material only in international transport, not to its use or storage in the territory of its home country.” Attempts to amend this treaty to make it more effective have been under way since 1998, with partial agreement being reached in 2003 on substantial portions of text. “However, the draft does not establish specific standards for protection because the negotiators are afraid to make the standards known to terrorists and because the negotiators don’t want a treaty” to govern internal security measures, Bunn says. “The pertinent IAEA standards are only recommendations adopted in 1999–before September 11.” Lack of progress in this area is due, in large measure, to different approaches to implementing physical security in different countries. For example, armed guards are routinely used in some countries, including the United States, but are banned in other countries; and advanced electronic barriers are not available everywhere. Physical security is a sensitive area in most countries, so that the protocol will not move forward rapidly without (and perhaps even with) effective U.S. leadership.

Make implementing the Additional Protocol of the IAEA a high priority and allow sensitive exports only to states complying with it. This protocol provides for more rigorous monitoring of facilities. It has been worked out among the state members of the IAEA over the past decade or more, and it is being implemented on a trial basis in a few of these states. Most recently, Iran has agreed to its implementation. But implementation remains far from ideal. Still, there is considerable backing for the protocol, and it may be that with U.S. leadership, most countries will agree to implementation, though perhaps slowly, at best. Although not the last word in facilities monitoring, the protocol nevertheless represents a significant improvement over past practices. In particular, it provides for two key measures. One measure gives the IAEA a right to request complementary access, on two hours’ notice, to additional facilities not originally included in routine inspections. The other permits environmental monitoring near an inspection site and, with the permission of the country involved, anywhere else. This latter provision makes it easier for IAEA inspectors to justify asking for environmental inspections anywhere and puts pressure on the inspected country to justify any refusal. These provisions would make concealment of a clandestine nuclear program much harder to maintain. Proving a negative is always difficult, however. As David Donohue of the IAEA Safeguards Analytical Laboratory has noted, “Solving the problem of verifying the absence of undeclared nuclear facilities requires tools that can give high evidence of detecting the presence of such facilities.” These tools would include sensors, such as cameras and radiochemical detectors, both on and off site; secure communication of sensor data in real time; and prompt no-notice inspections. Such tools would be in addition to measures that theoretically could be taken under the present protocols but have not been fully developed or installed, such as the use of portal monitors, emission sensors to provide facility data from operating reactors, electrical power monitors, and specialized monitors to indicate reactor performance between inspections. The technical requirements are thus quite high, and significant ongoing investments are needed by reactor operators and by the IAEA.
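As a purely illustrative sketch of the “secure communication of sensor data in real time” mentioned above, the Python fragment below packages a sensor reading with a keyed hash so that any tampering in transit is detectable. The sensor name, the reading, and the use of a shared HMAC key are assumptions made for illustration; they describe no actual IAEA system or data format.

```python
# Toy sketch of a tamper-evident sensor report, assuming a secret key shared
# between a monitoring sensor and the inspecting agency.  Generic illustration
# of authenticated real-time reporting, not any actual safeguards protocol.
import hashlib, hmac, json, time

SHARED_KEY = b"hypothetical-key-provisioned-at-installation"

def make_report(sensor_id, reading):
    """Package a reading with a timestamp and an HMAC tag over the body."""
    body = json.dumps({"sensor": sensor_id, "reading": reading,
                       "timestamp": time.time()}, sort_keys=True)
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_report(report):
    """Recompute the tag; any alteration of the body invalidates it."""
    expected = hmac.new(SHARED_KEY, report["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["tag"])

report = make_report("portal-monitor-07", {"gamma_counts_per_s": 112})
assert verify_report(report)
```

The design point is simply that continuous remote monitoring is only as credible as the integrity of the data stream; authenticating each report is what lets inspectors trust readings they did not witness in person.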

The three additional measures for increasing control of the most dangerous nuclear materials represent departures from the present system. These measures would require major investments, both economic and political, and are much more controversial:

Minimize accumulation of weapons-usable material, if necessary by using a new fuel cycle. Several tons of separated plutonium from the civilian fuel cycle now exist in Japan, Russia, and Western Europe, and hundreds of tons of excess plutonium and highly enriched uranium have been generated as part of the production of nuclear weapons. Reducing the HEU is technically straightforward: HEU can be “blended down” into LEU and the product then used as fuel in operating nuclear power plants. In practice, however, this effort has been held up several times by disagreements among the U.S. Enrichment Corporation (USEC), which is responsible for the job, the U.S. Department of Energy, Congress, and private interests, and has been proceeding slowly. The root causes are that the demand for nuclear fuel is limited, and the USEC, as a supplier to U.S. users, has an interest in minimizing the flow of foreign uranium into the United States.
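A rough sense of why blend-down is considered technically straightforward comes from a one-line mass balance: HEU is mixed with lower-enriched blendstock until the U-235 fraction of the mixture reaches the desired product level. The sketch below uses assumed values (90 percent U-235 HEU, natural-uranium blendstock at 0.711 percent U-235, and a 4.95 percent LEU product) purely for illustration.

```python
# Rough mass-balance sketch for "blending down" HEU to LEU.
# The enrichment values and masses are illustrative assumptions, not program data.

def blendstock_needed(heu_kg, heu_enrich, product_enrich, blend_enrich):
    """Kilograms of blendstock uranium required so that the mixture's
    U-235 mass fraction equals product_enrich (enrichments as fractions)."""
    return heu_kg * (heu_enrich - product_enrich) / (product_enrich - blend_enrich)

heu_kg = 1.0  # one kilogram of weapons-grade HEU, assumed 90% U-235
blend_kg = blendstock_needed(heu_kg, 0.90, 0.0495, 0.00711)
print(f"{blend_kg:.1f} kg of blendstock -> {heu_kg + blend_kg:.1f} kg of ~4.95% LEU")
# roughly 20 kg of natural uranium per kg of HEU, yielding about 21 kg of reactor fuel
```

On these assumed numbers, each kilogram of weapons-grade HEU becomes roughly 21 kilograms of ordinary reactor fuel; the obstacles described above are institutional and economic rather than technical.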

Reducing the stock of plutonium is more complex. Plutonium can be partially burned by combining it with uranium and introducing the mixed uranium-plutonium oxide (MOX) fuel into existing reactors. The leftover plutonium, now changed chemically and mixed in with highly radioactive spent fuel, is less available and desirable for weapons use. This method already is used in France, and a preliminary MOX program is under way in the United States. Plutonium could be burned more completely in a new generation of fast neutron spectrum reactors, but this approach would take longer and cost more, given the development, licensing, and construction time needed to install such reactors. Indeed, both of these methods entail higher costs than the present methods of fueling nuclear reactors, and therefore they will have to be subsidized by the government. Still another method for disposing of plutonium is to immobilize it in a stable matrix and then bury that material underground. Work on this approach is under way, but again it is going slowly because of costs, perceived environmental problems, and questions about the procedure’s effectiveness in providing a barrier against later plutonium separation. The actual degree of security that this method provides and the economic consequences of moving to production-scale activities remain significant unknowns. Answering such questions may take decades. In addition, permanent disposal is unattractive to some individuals and governments, particularly in Russia, who see excess plutonium as a resource for the future.

If nuclear reactors and fuel-cycle activities spread more broadly in the world, then a fuel cycle that minimizes the accumulation of weapons-usable material will increasingly be viewed as necessary for security. This effort is held hostage to the debate, almost theological in nature, between adherents of the once-through cycle and those of reprocessing. Each side quotes economic and environmental arguments. In fact, the economic differences are well within the uncertainties of the estimates, as are the environmental differences. Thus, a clear choice remains elusive. It is clear, however, that secure fuel cycles, with and without reprocessing, need to be developed. Preliminary work has been done on such cycles, notably by Argonne National Laboratory.

Establish internationally available storage sites for nuclear materials. Since Eisenhower’s initial proposal, many suggestions have been made for putting nuclear materials not actually being used, whatever the source and composition, under international monitoring or control. The proposals have variously been driven by security, safety, and environmental considerations, and they have foundered owing to economic, political, and siting concerns. Such recommendations may now receive more attention, for security as well as political reasons. In several countries, the storage of spent fuel at utility sites (probably not the most secure form of storage) cannot be increased without incurring costs that utilities are unwilling to bear. Recently, the IAEA’s director general said, “We should consider multinational approaches to the management and disposal of spent fuel and radioactive waste . . . Considerable advantages–in cost, safety, security, and nonproliferation–would be gained from international cooperation in these stages of the fuel cycle.” This message has been expanded on by Atsuyuki Suzuki, a member of Japan’s Nuclear Safety Commission and professor of nuclear engineering at the University of Tokyo, who said, “What I believe is more acceptable globally is to establish a multinational system where spent fuel is managed with more centralized and intensive international safeguards . . . It would generate a tremendous amount of benefits for many nations which intend to use nuclear energy for peaceful purposes only, because it would provide the most economical and flexible option for managing spent fuel not merely in terms of direct cost but also taking into account indirect cost associated with such externalities as security and environmental concern.”

Place enrichment and separation facilities under international authority. This proposal, made most recently by IAEA Director General Mohamed ElBaradei in the fall of 2003 and again in 2004, along with other measures along the lines suggested in this paper, is the most controversial and arguably the most important. It also is the one on which the least progress has been made. Enrichment and separation facilities already exist in at least a dozen countries. They involve both commercial and military secrets. Vested interests, including the owners of these facilities and the managers of other facilities who want assured fuel supplies at market prices, are considerable and have not been reconciled. The recent disagreement between Iran, on one side, and the United States and other nations, on the other, about Iran’s need for enrichment facilities is a case in point. The problem must be tackled, because these are the most sensitive facilities in the nuclear enterprise, aside from storage sites for weapons-usable materials. Control over enrichment or plutonium separation facilities gives a state the capability to take the most time-consuming step toward nuclear weapons, yet remain within its rights under Article IV of the Non-Proliferation Treaty (NPT).

A host of possibilities can be envisioned for dealing with this problem as nuclear power expands worldwide. Among the suggestions to date are international monitoring of national facilities or some form of international authority over these facilities (perhaps including international ownership). The latter obviously brings up governance questions that are far from settled.

In a February 12, 2004, speech at the National Defense University, President Bush proposed several initiatives along the lines discussed here and in general accordance with measures also proposed by ElBaradei. A major difference was President Bush’s declaration that “Enrichment and reprocessing are not necessary for nations seeking to harness nuclear energy for peaceful purposes. The 40 nations of the Nuclear Suppliers Group should refuse to sell enrichment and reprocessing equipment and technologies to any state that does not already possess full-scale, functioning enrichment and reprocessing plants.” The Bush plan would create an international cartel, although the president also said: “The world’s leading nuclear exporters should ensure that states have reliable access at reasonable cost to fuel for civilian reactors, so long as those states renounce enrichment and reprocessing.” The plan nevertheless would almost surely be considered by some, perhaps most, NPT parties to violate Article IV of the treaty. On the other hand, it would bypass the need for international agreement and enforcement and could be put into practice progressively as supplier states agreed. A combination of the Bush and ElBaradei proposals could possibly evolve if most states agreed to the substance of the two, but considerable negotiation would be required.

The current difficulties over Iran’s and North Korea’s capabilities and the proliferation network centered on Pakistan are only an early indication of what may come to pass as nuclear-related capabilities and demand for electricity worldwide increase. The leaders of the primary countries with nuclear capabilities should establish an international working group charged with developing a technical, administrative, and legal framework that will lay the groundwork for resolving the questions noted (and others like them) in a way that puts security first while safeguarding commercial and military interests. Technically, this is feasible. Politically, it is another matter. President Bush, in his February speech, took a step in that direction by proposing “the creation of a special committee of the IAEA Board which will focus intensively on safeguards and verification” and that “No state under investigation for proliferation violations should be allowed to serve on the IAEA Board of Governors–or on the new special committee.”

Toward improved governance

The various steps for improving materials control and facilities monitoring are necessary but not sufficient to ensure a secure nuclear future. There inevitably will be disputes, and probably even outright cheating, on the measures agreed to. Thus, the governments involved must agree on how these measures will be governed and enforced if they are to be effective. Some machinery for governance and enforcement does exist, principally through the IAEA and the UN Security Council. Opinions differ on how effective this machinery has been. There is general agreement, however, that the world’s governance and enforcement machinery, along with the technical and organizational measures, must be updated and strengthened if it is to be equal to the challenges posed by much wider diffusion of nuclear technologies worldwide and the existence of sophisticated international terrorist organizations. Such updating would at least entail the following:

Improved definition of what constitutes a violation of the NPT and what justifies inspections. The NPT is not a solution to all nuclear ills, but it remains the only widely accepted basis for evaluating international programs of cooperation in nuclear matters, whether involving assistance with civilian technologies or security against misuse of these technologies. An essential step in making the treaty adequate to a world in which nuclear weapons technologies are more widely available is to agree on a definition of what constitutes a violation. Today, countries that want an option to produce nuclear weapons can build facilities to make the necessary materials and come right up to the design and testing of the actual weapons, all without violating the NPT. Placing enrichment and separation facilities under international authority, in conjunction with tightening physical protection and improving accounting and inspection practices, would go a long way toward remedying that situation. If such authority is in place, then a refusal to abide by the authority’s standards or an attempt to evade its oversight should be defined as a violation of the NPT. That will not be agreed to easily. But important leverage can be provided if the countries that have or readily could have nuclear power can reach broad agreement that nuclear terrorism must be prevented.

Agreement within the UN Security Council and other key organizations, including the so-called P5 group of nations, on the steps to be taken after a violation. This is a necessary adjunct to the need for defining violations more precisely and effectively, but it will be far harder to achieve. Indeed, the recent dispute in the UN over the legitimacy of the invasion of Iraq demonstrates just how difficult it will be to reach such agreement. The other recommended steps can begin to be tackled by high-level commissions, if the political will is present, because they lie in areas where there is some commonality of purpose. In this case, however, there are very difficult gaps to bridge. One of the widest gaps now seems to exist between the United States and much of the rest of the world. On one side is a U.S. administration that sees the problem of terrorism as justifying strong unilateral action to make over the areas where international terrorism has some of its roots. On the other side are the countries, including many democratic allies of the United States, that maintain that the UN and its institutions, including the agreement not to make war unilaterally, lie at the very root of international security. Bridging this gap can only come as a process of gradual agreement in the course of dealing with problems of proliferation and terrorism as they arise. The case of Libya, in which that nation has taken steps toward peace and has been welcomed back into the community of nations, may be indicative of early success in combining diplomacy with the threat of enforcement. The cases of Iran and North Korea are more difficult and should be looked at as opportunities to build an effective and agreed-upon approach to enforcement. Whether that will be the case or not remains to be seen.

Appointment in the United States of a high-level presidential representative to push needed initiatives and coordinate relevant programs. The menu of needed steps will not go forward with sufficient speed (in fact, it may not go forward at all) without active leadership from the U.S. government. That leadership, in turn, will not be available unless the president backs the effort himself and names a personal representative at a suitably high level who is known to have his ear. The recent appointment of former Secretary of State James Baker to negotiate the forgiveness or rescheduling of Iraq’s debts to other states is an example of the kind of representation needed. Accomplishing the full slate of needed tasks will take longer than a single presidential term. A presidential initiative of the sort envisaged, with backing from Congress, could give the effort enough momentum to last through several administrations, just as the nonproliferation initiatives of the past did.

Reducing demand for nuclear weapons

One of the major forces fueling the nuclear juggernaut, of course, is the demand for such weapons. Here too, solutions may be possible, but hard to achieve. Among proposed needs and actions:

Recognize that the supply of nuclear materials and weapons cannot be completely controlled without cooperation from some of the very regimes that today cause concern. This point has become obvious with the recent revelations about Pakistani weapons and centrifuge technology trade with North Korea, Iran, and Libya. But it has been clear for some time that with the greater ease of making key nuclear weapons materials, supply could not be interdicted by actions from the traditional suppliers alone, even under the unlikely assumption that those suppliers could police all of their citizens and visitors. As a result, cooperation is needed from the very regimes that may come to the conclusion (on the basis of perceived security concerns or domestic politics, or both) that they need nuclear weapons. Yet, perceptions of security and the domestic politics with which these perceptions are intertwined are hard to alter from the outside. The United States and like-minded states cannot guarantee (or, with the occasional exception, even afford to improve significantly) the security of these regimes against foreign or domestic opponents. Nor can they much affect perceptions in states that seek nuclear weapons for prestige. As a result, universal adherence to nuclear nonproliferation must remain a long-term goal. Current trends in the perception of security and in domestic politics in states of concern, so far as they can be ascertained, are not favorable: Iran, Iraq, North Korea, and Pakistan are only current or recent examples. How Indonesia, Saudi Arabia, and others will evolve in these respects is unclear as well.

Place the highest priority on breaking any links between nuclear weapons capabilities and terrorist groups. Given the scale and long-term nature of the task of controlling either proliferation or terrorism, some criterion for assigning priority must be established. The most obvious priority is to break any link that exists between a state possessing nuclear materials or capabilities and any terrorist group that has the intent and capability to harm the United States or its allies or clients. However, placing an overriding priority on combating all terrorism could lead the United States into a much larger, more difficult, and infinitely more contentious endeavor. From a practical point of view, the first priority should be to break any links between state holdings of nuclear weapons or nuclear weapons materials and any subnational group. It is also important to break any such linkages related to certain kinds of advanced biological weapons capability.

Extend and clarify security assurances and the basis for extending them. The NPT, it can be argued, has been successful because of two factors: the past technological difficulty of making nuclear weapons and the discipline imposed by the Cold War on most nuclear-capable states. Both factors are now gone. With respect to nuclear proliferation, the bipolar order is gone and a unipolar order has not been established; indeed, the idea of a unipolar order is opposed by many of the states that would be natural partners in restoring effective nuclear nonproliferation measures. These measures on the demand side include security assurances and economic benefits for the states that adhere to the NPT, and enforcement threats (political, economic, and, if necessary, military) against those that do not. The assurances given by the nuclear weapons states to the effect that they will not attack the nonnuclear weapons states with nuclear weapons are clearly insufficient: states such as Iran, Pakistan, and Saudi Arabia are concerned not just about U.S. actions but also about the possible actions of their neighbors. The assurances thus must be broad-based, contingent on good international behavior, and, in essence, parallel to those of the UN Charter, under which the UN Security Council will consider action in case of attack by one state against another. Such broad assurances now exist only on paper, and the record does not support confidence in them. It will be extremely difficult to bring such confidence about. The obstacles standing in the way of bringing about such an international order have their roots both in the states that would provide the assurances and the states that need them. Coherent, consistent actions by the major powers may bring about some progress over the long term.

Exercise U.S. leadership in reducing both nuclear weapons and reliance on them. Although most of the current and previous U.S. strategies and policies place high priority on limiting or ending the proliferation of nuclear weapons and preventing nuclear terrorism, one aspect of policy now goes in the opposite direction: the new emphasis on nuclear weapons spelled out by the U.S. Department of Defense in its recent Nuclear Posture Review (NPR). According to the NPR, “U.S. nuclear forces still require the capability to hold at risk a wide range of target types. This capability is key to the role of nuclear forces in supporting an effective deterrence strategy relative to a broad spectrum of potential opponents under a variety of contingencies. Nuclear attack options that vary in scale, scope, and purpose will complement other military capabilities.” The NPR further states that new nuclear capabilities “must be developed to defeat emerging threats such as hard and deeply buried targets, to find and attack mobile and relocatable targets, to defeat chemical or biological agents, and to improve accuracy and limit collateral damage. Development of these capabilities, to include extensive research and timely fielding of new systems to address these challenges, is imperative.” However, a number of countries, including some key U.S. allies, maintain that these developments would violate U.S. obligations under the NPT, as well as the nation’s obligations undertaken in connection with the 2000 NPT Review Conference. Representatives of these countries, along with many people within and outside of the United States, such as the IAEA’s ElBaradei, believe that the nuclear weapons states must adhere to their obligations under the NPT if the treaty is to remain effective. The development of new nuclear weapons capabilities by current nuclear states can provide incentives for the development of nuclear weapons by other states and make the attainment of a unified and effective international stance against nuclear proliferation even more difficult.

Eyes on the prize

Although the challenges ahead are many, there are at least signs of progress. After the attacks of September 11, 2001, the Bush administration took an important step in the direction of breaking any link between nuclear capability and terrorist groups when it announced that countries that hosted or tolerated terrorists would be held responsible for the terrorists’ acts. In February 2004, President Bush proposed a seven-point plan to make it more difficult to sell nuclear equipment on the black market. The plan would place limits on the shipment of such equipment “to any state that does not already possess full-scale, functioning enrichment and reprocessing plants.” The president also proposed expanding his program to share intelligence on proliferation, and he called for the UN Security Council to require all states to criminalize nuclear weapons proliferation. Coupled with some of the practical measures advanced under various international protocols, these are all steps in the direction of greater security.

The comprehensive, sustained, internationally agreed-on program outlined here would go much further, and given current technological and political trends, it is needed if the nuclear danger is to be avoided. Coherent and consistent leadership from the United States and other states is essential if the needed programs are to go forward with adequate speed. Only with such leadership and agreement among the affected countries can efforts at comprehensive security for the peaceful atom eventually succeed and a worldwide program with the thrust and durability of President Eisenhower’s Atoms for Peace program be implemented.


Michael May, professor emeritus at Stanford University and director emeritus of Lawrence Livermore National Laboratory, is at the Center for International Security and Cooperation, Stanford Institute for International Studies, Stanford, California. Tom Isaacs is director of policy, planning, and special studies at Lawrence Livermore National Laboratory.

Needed: A Revitalized National S&T Policy

The proposed 2005 federal budget puts the nation at risk by shortchanging support for critical research activities.

A lot has been said and written recently about U.S. manufacturing job losses. Much of the focus, though, has been on the movement of U.S. jobs overseas. Not enough attention has been paid to the need to create new high-wage jobs in the U.S. economy. What actions should the United States be taking to achieve that goal?

A high-wage society has some obvious building blocks. They include a fair and equitable tax structure, an educated and skilled workforce, an efficient and robust transportation infrastructure, a modern communications infrastructure, and so on. But we would argue that any discussion of high-wage job creation should start with what military strategists refer to as “the tip of the spear.” And we firmly believe that in the economic competition for high-wage job creation, the tip of the spear is S&T. Just as in the case of national security, economic security depends on the United States remaining the world leader in S&T. If that leadership is lost, the nation’s capacity for high-wage job creation will soon atrophy.

Losing the capacity for high-wage job creation would leave the United States without an adequate response to the creative destruction that Joseph Schumpeter described as inherent in our capitalist system. The competition brought about by new technologies and new markets destroys companies and entire industries. The jobs that existed in those industries are lost, only to be replaced by new jobs in other industries and in companies that are nimble enough to take advantage of dynamic change. As Andrew Grove of Intel says, “Only the paranoid survive.”

If the United States is to lead in the 21st century, it must begin by recognizing that the world of the future will be shaped by new technologies and their rapid diffusion. Entire industries may disappear in the process or be utterly transformed. For example, the entire industry of recorded music is already being reshaped by the ease of downloading music from the Internet. Sales of recorded CDs have been dropping each year for the past few years. Today, blank CDs for making recordings at home substantially outsell recorded CDs. When you walk into a Staples or Office Depot store and see a big display of blank CDs for sale, you can be certain that most of those CDs are not destined to be used to store spreadsheets of data. Even the small number of high-profile lawsuits against individuals who burn discs of music without regard to copyrights has not appreciably altered this phenomenon. The music industry is still in search of a mechanism to adapt to a fundamentally new business environment brought about by the diffusion of two technologies: the Internet and cheap CD-burning drives.

The biotechnology industry is one that has sprung up in a very short time. The basic patent for genetic engineering–the Cohen-Boyer patent on making recombinant DNA–was filed 30 years ago. No one at that time would have predicted that we would one day have a biological industry rivaling the chemical processing industry, which was already a century old in 1974.

The United States reaped enormous economic benefits from leading the development of the Internet and the harnessing of biotechnology. But these are far from the last technological revolutions we will see. The key questions as we look to the future are these: Which countries will win the competition to develop new industries and new jobs based on future technological changes? Which countries will lose out? And, once the current wave of technological change has passed, which countries will be best positioned for the next inevitable wave of change?

The United States ignores these questions at its peril. After reviewing President Bush’s proposed 2005 budget for S&T, we are persuaded that the Bush administration is ignoring them now.

Technological revolutions

The United States is, in fact, in the middle of a set of interrelated technological revolutions that are reshaping existing industries and leading, in a number of cases, to entirely new industries. Lester Thurow’s recent book, Fortune Favors the Bold, refers to a number of the most important such revolutions:

  • Biotechnology, including the new frontier of developing “artificial life” forms
  • Microelectronics, including the continued miniaturization of electronic devices and the increasingly widespread diffusion of data-processing power
  • High-end supercomputing
  • Telecommunications technologies
  • Human-made materials (including materials in which the structure has been designed and built at the atomic or molecular level–the essence of nanotechnology)
  • Robotics

To these we would add new energy technologies, including renewable energy technologies that are as inexpensive as traditional fossil sources of energy, technologies using hydrogen as an energy carrier, and technologies for energy efficiency.

All these technologies are obviously crucial to our future. Will the United States play a leading role in their continued development? The answer is not that self-evident. In the 60 years since World War II, other countries and regions of the world have built S&T capabilities that rival or are destined to rival that of the United States. The governments of China, India, Japan, and the European Union have all targeted advancements in their research and innovation system as key elements of their plans for future national and regional economic prosperity. Even if the United States maintains a strong S&T policy, these other countries and regions will provide stiff competition. Unfortunately, though, just as this international challenge is becoming very clear, this administration appears to be sticking its head in the sand.

A look at the budget proposal for fiscal year (FY) 2005, submitted by President Bush, shows serious gaps in support for the kind of basic science and engineering that will be most important to the development of technologies and industries in the future. These include:

  • $660 million in cuts proposed for basic and applied research at the Department of Defense–the sort of research that has the greatest potential for dual use and effective spinoff to the civilian high-technology industries
  • $68 million in cuts proposed for the Department of Energy’s Office of Science, which is a major supporter of basic physical sciences and engineering research
  • $63 million in cuts proposed for energy conservation R&D at the Department of Energy
  • $183 million in cuts proposed for agricultural research
  • $24 million in cuts proposed for transportation research
  • Total elimination of the Advanced Technology Program (ATP) at the Department of Commerce, a loss of $171 million in FY 2005 for new technologies that would have otherwise been enabled and brought to commercial reality

The termination of the ATP is a particularly egregious step in the wrong direction, in light of the past accomplishments of the program and the current global competition in technology that the United States faces. To understand why this is the case, a brief explanation of the role and track record of the ATP is in order.

Between the stages of the R&D process in which the government predominantly invests (fundamental research) and in which industry predominantly invests (commercialization of reliably profitable products) lies what many call the technology “valley of death.” That’s the gap where private capital markets fail to invest applied research dollars to create so-called “platform” technologies. This market failure occurs because such generic technologies are too expensive or too risky for industry to develop on its own. Yet it is precisely these generic platform technologies that are the seed corn for new products, and in many cases entire new market categories. The benefits to industry generally and to our national economy of leadership in platform technologies far outweigh the costs of developing such technologies. Filling in this funding gap in the valley of death is precisely the role that the ATP has been designed to play for civilian technology.

In carrying out this role, the ATP has had a number of successes in preserving critical technology sectors in the United States and facilitating a leading U.S. position in others. These successes include preserving the printed wiring board industry, helping the U.S. automotive industry reduce dimensional variations in components from 5 to 6 millimeters to less than 2.5 millimeters, and stimulating the development of the DNA diagnostic tool industry.

To be sure, not all ATP projects have been as successful as these. The following statistics, though, put the overall program in perspective. The total cost of ATP funding to date has been about $2.1 billion. Preliminary results of a 2003 ATP survey of more than 350 companies indicate that actual economic value resulting from ATP joint ventures now exceeds $7.5 billion. Benefits from just a few projects analyzed to date are projected to exceed $17 billion when those platform technologies are fully exploited by the industries involved. That is an impressive social return on a modest government investment.
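
Taken at face value, the program’s own figures imply a healthy return. The quick calculation below simply forms ratios from the numbers quoted above; it ignores discounting, the private cost share, and benefits not yet realized, so it is a rough indication rather than a formal benefit-cost analysis.

    # Back-of-the-envelope ratios using only the ATP figures quoted above ($ billions).
    atp_funding_to_date = 2.1   # total ATP funding to date
    realized_value = 7.5        # economic value reported from ATP joint ventures
    projected_benefits = 17.0   # projected benefits from a few analyzed projects

    print(f"Realized value per dollar of funding:  {realized_value / atp_funding_to_date:.1f}")
    print(f"Projected value per dollar of funding: {projected_benefits / atp_funding_to_date:.1f}")
    # Roughly 3.6 and 8.1 dollars of reported or projected benefit per program dollar.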

The rationale for the termination of the ATP in the president’s budget documents is truly perplexing. The entire discussion of the ATP, as it appears on page 233 of the Appendix Volume of the president’s budget request, is as follows: “The ATP endeavors to help accelerate the commercialization of high-risk, broad-benefit enabling technologies with significant commercial potential. ATP is a merit-based, rigorously competitive, cost-shared partnership program that provides assistance to U.S. businesses and joint R&D ventures to help them improve their competitive position. The President’s 2005 Budget proposes to eliminate the program and, therefore, no funds are requested for FY 2005.”

That’s it. Literally, the president’s rationale is that the ATP is a great program, it helps our competitiveness, it is well run and effective; therefore, we are going to kill it. The real message is that programmatic success in S&T does not trump ideology in the current administration.

More cuts

Another aspect of the president’s budget that also underscores the low priority of S&T policy for the administration is the underfunding of important R&D programs that Congress has authorized, by overwhelming margins, and that President Bush has signed into law. A case in point is cybersecurity R&D. Every American knows that computer viruses and worms can cause real damage to the economy. In November 2002, Congress passed, and President Bush signed, the Cyber Security Research and Development Act, which authorized a significant program of R&D on computer and network security at the National Science Foundation (NSF). For FY 2005, those R&D authorizations amounted to just over $122 million. After signing the bill, the president had a complete budget cycle to develop a budget request incorporating the authorizations he signed into law. But no proposed funding in FY 2005 for NSF is designated for carrying out this law. In essence, President Bush’s signature on a law to increase R&D investment in cybersecurity meant nothing when it came time for his administration to put together the FY 2005 budget. Instead, NSF has opted on its own to attempt to carry out a fraction of the authorized program.

A similar situation has occurred in nanotechnology. Last year Congress passed, and President Bush signed, a major research authorization bill for nanotechnology. The contents of the bill were well known during the bulk of the budget cycle. For FY 2005, the bill provided for nanotechnology spending across five agencies of $809.8 million. The president chose to hold a formal signing ceremony at the White House for this bill, something that rarely happens with R&D-related legislation. The White House press release for the signing ceremony noted that the president had previously requested a 10 percent increase in nanotechnology funding in the FY 2004 budget. In the FY 2005 budget request, after the signing ceremony and the photo opportunity were over, the president requested only a 3 percent increase for the National Nanotechnology Initiative, as calculated by the Office of Management and Budget. Thus, before Congress passed the legislation, the president touted a 10 percent increase for nanotechnology; after Congress passed it, he requested only a 3 percent increase. Moreover, comparing the president’s FY 2005 nanotechnology request with the authorized levels he signed into law in December shows that he requested $200 million less for nanotechnology R&D in the budget he sent Congress on February 2 than he had signed into law only two months earlier.
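
A quick reconciliation of these figures, using only the numbers cited in this paragraph, makes the gap concrete; the result is an implied request, not a number taken from the budget documents themselves.

    # Reconciling the nanotechnology figures cited above (millions of dollars).
    authorized_fy05 = 809.8   # five-agency authorization the president signed in December
    shortfall = 200.0         # amount by which the FY 2005 request fell short of that level
    implied_request = authorized_fy05 - shortfall
    print(f"Implied FY 2005 request for the authorized agencies: about ${implied_request:.0f} million")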

Finally, there is a total disconnect between science and the administration’s plans for the space program. At the same time that President Bush is cutting, terminating, or failing to fully fund R&D programs with demonstrated effectiveness in creating jobs and wealth in this country, he is proposing a manned Moon-Mars initiative at NASA that is likely to yield little benefit to the nation. Most of the alleged technology spinoffs of past space exploration activities were substantially oversold. We did not invent Teflon, Velcro, or even Tang in the space program. To pay for the new Moon-Mars initiative, the president will take funds from other parts of NASA over the next few years. Beyond that, future presidents will have to direct substantial funds to manned space flight in order to keep the program on schedule.

We have already seen the first wrong-headed move at NASA in the area of diverting resources: the proposed abandonment of the Hubble Space Telescope, one of the premier scientific assets in all of NASA. The Hubble is still in its prime and is capable of continuing to make major discoveries about the universe and its formation. The proposal to abandon the Hubble to find money to plan for a manned mission to Mars is a sad commentary on the scientific priorities of this administration. Because of the outcry from the scientific community and from advocates such as Sen. Barbara Mikulski (D-Md.), this proposal is now getting a second review inside NASA. But it is too soon to say that it will be withdrawn. The fact that this termination was proposed in the first place, though, illustrates the low value placed on real science in the administration’s thinking about the nation’s future.

Other administration S&T policies are potentially just as deleterious as the cutbacks in funding in this year’s budget proposal. For example, visa and other immigration restrictions that have been put in place over the past 2 years are threatening the future vitality of the U.S. university system in sciences and engineering. Foreign-born students coming to this country have, for decades, been an important asset to the United States. After completing their training, many have stayed here to make significant contributions to basic science and to new products. They are a great source of strength to the U.S. innovation system and to the country. We have only to look at the current director of the National Institutes of Health, Elias Zerhouni, who was born in Algeria and came to the United States in his early 20s to train in diagnostic radiology at Johns Hopkins University in Baltimore.

Today, in the name of increasing national security, the Bush administration is making it extremely difficult for the best and brightest foreign students to come to the United States, to be educated, and to remain in this country and become citizens. Instead, the effect of our policies is to drive away from the United States scientists and engineers who want to come here to build a better life for themselves and our society.

The end result of these policies may well be that the brightest students from around the world will increasingly choose non-U.S. educational institutions for their advanced education. Major scientific meetings may also increasingly take place outside the United States. U.S. policies could thus have the effect of strengthening the innovation systems of other countries. We might well be encouraging high-wage job growth to take place overseas, instead of in the United States.

An agenda for Congress

We believe that as Congress moves forward with legislative action on S&T this year, it can and must do better than the president has done to date. We recommend several actions that Congress can take.

Congress can put more pressure on the president to beef up the White House Office of Science and Technology Policy (OSTP). One of the basic reasons why there seems to be so little leadership on S&T issues coming out of the White House may be that OSTP appears to be severely understaffed. The current science advisor is authorized, under law, to have six high-level deputies, and most past science advisors had extremely well-qualified individuals in all these positions. Under this administration, only two of those six positions have been filled. No attempt was made to adjust that staffing strategy after the events of September 11, 2001, put terrorism and homeland security on the president’s radar screen and homeland security R&D on the front burner. Accordingly, the president’s science advisor appears to have spent the bulk of his energy on terrorism-related issues, with the result that the overall health of our scientific and technical foundations has not received the attention it could have received from a fully functioning OSTP.

Congress can require that the president actually prepare and make public an S&T policy. Having such a document is not a panacea in itself, but the discipline of having to sit down and write one might force the White House to give serious thought to the technological opportunities and revolutions that the nation is otherwise about to miss.

In its annual Concurrent Resolution on the Budget, which serves as its own blueprint for action on the president’s budget request, Congress can insist that the whole Federal Science and Technology Budget receive better and more unified consideration. As a start, the relevant committees in the Senate could schedule annual joint hearings on the overall shape of national S&T spending. It might also be worth considering whether the functional structure of the budget itself should be revised to put the entire federal S&T budget in one budget category. This would not involve moving programs from the agencies in which they now reside, but it would mean that the government would simply use a common budget classification code (known as a “budget function”) for all S&T spending. This would improve the transparency of the real trends in the national budget for S&T. It would also allow Congress to more easily address S&T programs in a holistic manner.

Finally, Congress needs to take a strong role in resisting the cuts in R&D being proposed by the president in this budget, particularly to programs such as the ATP. Frankly, instead of terminating the ATP, the Bush administration should be looking to duplicate its strategies and successes in other federal agencies. For example, the Department of Energy, the Environmental Protection Agency, and the Department of Homeland Security could all benefit from having programs structured along the lines of the ATP, as part of the overall mix of programs in each agency to spur the development of new technology.

The one thing that we hope the Congress does not do is what the administration, unfortunately, has done. That is to lose focus on where the real source of our future national wealth and high-wage job creation opportunities lies. Our future national economic security depends crucially on the innovation and genius of our scientists and engineers, particularly in universities and other major laboratories that are supported by the federal government. We need to make well-reasoned choices about what our real priorities are. Developing and executing a coherent national S&T policy needs to be recognized as the priority that it in fact is.


Jeff Bingaman is a Democratic senator from New Mexico and the ranking Democratic member of the Senate Committee on Energy and Natural Resources. Robert M. Simon is the Democratic staff director of the committee. Adam L. Rosenberg is the 2003-2004 American Physical Society Congressional Science Fellow.

Improving Prediction of Energy Futures

Most energy-economic models do not provide policymakers with the information they need to make sound decisions.

When federal lawmakers pass–or do not pass–legislation related to the production and use of energy, their actions ripple across society. Their decisions affect not only the mix of fuels, the price of power, and the spread of pollution, but also federal deficits, corporate fortunes, and even national security. Thus, policymakers need to have in hand the best possible projections about the future demand, supply, and cost of various energy options. Unfortunately, a growing disconnect exists between politicians and the economists who develop those projections.

Various government agencies, as well as an array of universities, private consulting firms, and interest groups, have developed energy-economic models, some more sophisticated than others. Yet lawmakers increasingly feel that these models fail to answer, or even properly evaluate, their questions about the most effective means to achieve policy goals. Economists, meanwhile, complain that politicians do not ask clear questions of the models.

Part of the communication conflict results from the different natures of modelers and politicians: Whereas economists seek quantifiable measures and mathematical certainty, lawmakers deal with anecdotes, dueling stakeholders, and the human chaos of politics. But more fundamentally, a new relationship must develop between policymakers and modelers. Lawmakers need economists to help highlight the actions that would best achieve elected officials’ policy goals, such as the reduction of greenhouse gases to certain levels. Rather than offering only unsolicited advice on the benefits or shortcomings of particular policies, modelers need to provide policymakers with observations on the most effective legislative and regulatory steps to obtain policy objectives.

Critical but troubled

The energy-economic models that policymakers use are critical, because government policies clearly have an impact on the energy market. The development of electricity-generating technologies, for instance, will differ if Congress approves the Bush administration’s Clear Skies initiative rather than stricter pollution standards. That debate turns, in part, on the interrelated issues of energy, pollution, and national security, and energy-economic models can help untangle those complex interactions to estimate the likely results of various policy options.

From a policymaker’s perspective, however, the current state of energy-economic modeling is disappointing. Lawmakers frequently see dueling forecasts as little more than lobbying tools for interest groups. Countering the environmentalists’ optimistic estimates of energy conservation opportunities, for instance, are downbeat studies promoted by industrialists. Policymakers, moreover, note the inaccuracies of past projections, and they wish economists were more upfront about the limitations of their models, the reality of uncertainties, and the range of possible scenarios. Lawmakers are skeptical of models that assume a static status quo, and they would like better accounting for technological innovations and “externalities,” such as pollution, health care, and reliability.

Despite such shortcomings, energy-economic models remain the logical means by which policymakers can plan and prepare for the future. But they must be used wisely. Just as people adjust plans in their daily lives as conditions change, so we must appreciate that energy-economic models are only current best guesses about the future.

Policymakers need to understand the limitations and biases of models, and modelers need to admit that energy projections have not been particularly accurate. During the 1960s, energy-economic models tended to underestimate future energy growth. Projections made in the 1970s, in contrast, tended to overestimate energy consumption and production. The energy shocks of the 1970s and the resulting reductions of energy consumption in response to higher energy prices slowly forced economists to substantially lower their consumption estimates. Those lowered projections proved to be fairly accurate, and modelers take pride in the fact that a key 10-year forecast made in 1990 was within 1.4 percent of the actual consumption of total energy in 2000.

Yet boastful economists largely ignore the fact that this forecast overestimated electricity and petroleum prices by approximately 25 percent. One would have expected cheaper-than-anticipated energy to cause more consumption. The fact that energy use remained low with relatively low prices suggests, first, that modelers did not account for technological and market changes that kept energy demand in check; and, second, that modelers underestimated the potential within the U.S. market for energy efficiency. Some researchers looking back at these modeling efforts have determined that modelers underestimated the rate of energy-saving technological change and thus assumed that measures to reduce energy use would require significantly higher energy costs. Researchers also have noted that later forecasts of oil and natural gas prices have not correlated with reality.
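
A simple consistency check shows why the price error matters. The own-price elasticity used below is an assumed, illustrative value (the studies cited here do not report one); the point is only that cheaper-than-forecast energy should, by itself, have pushed consumption well above the forecast.

    # Illustrative check on the 1990 ten-year forecast discussed above (Python).
    # The elasticity is an assumption chosen for illustration, not a published estimate.
    price_error = -0.25        # actual prices roughly 25 percent below the forecast
    assumed_elasticity = -0.3  # assumed long-run own-price elasticity of energy demand

    expected_demand_shift = (1 + price_error) ** assumed_elasticity - 1
    print(f"Consumption increase expected from cheaper energy alone: {expected_demand_shift:+.1%}")
    # About +9 percent, yet actual consumption landed within 1.4 percent of the
    # forecast, implying offsetting efficiency and structural changes the model missed.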

Revisiting predictions is a humbling and sometimes instructive exercise. Noted futurists can offer insights as well as miscalculations. H. G. Wells, for instance, presciently predicted in 1902 that transportation systems would be based on automobiles and freeways, yet he failed to account for the role of airplanes. Even Amory Lovins, who is given much credit for bucking conventional wisdom in the late 1970s and accurately predicting slow energy growth, was way off the mark when it came to estimating renewable energy’s market penetration.

Assumptions affect outcomes

Quite logically, models using different factors and assumptions will generate different results. From a policymaker’s perspective, those differences can be aggravating. Suppose, for instance, a lawmaker wants to understand the economic impact of imposing a carbon tax that is expected to reduce the output of greenhouse gases by 35 percent. One model suggests that such action would raise the nation’s economic activity by 1.5 percent, whereas another says that the gross domestic product would fall by 3 percent. What is a policymaker to think if models cannot give a clear answer to the question of whether this carbon tax will help or hurt the economy? The declaration that assumptions matter is not a satisfying response to elected officials wanting to make informed policy.
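
A deliberately stylized sketch shows how the same policy can come out with opposite signs. Everything below is invented for illustration: the “model” is nothing more than a convex abatement-cost term set against a revenue-recycling gain, and the coefficients are chosen only to reproduce the kind of divergence described above.

    # Purely illustrative: how assumptions flip the sign of a modeled GDP impact
    # for the same 35 percent emissions cut. All parameters are invented.
    def gdp_impact(cut, abatement_cost_coef, recycling_gain_coef):
        """Net GDP effect (percent): recycling gain minus abatement cost, both stylized."""
        abatement_cost = abatement_cost_coef * cut ** 2  # convex cost of cutting emissions
        recycling_gain = recycling_gain_coef * cut       # gain from recycling tax revenue
        return recycling_gain - abatement_cost

    cut = 0.35
    print(f"Optimistic assumptions:  {gdp_impact(cut, 10.0, 7.8):+.1f} percent of GDP")
    print(f"Pessimistic assumptions: {gdp_impact(cut, 30.0, 2.0):+.1f} percent of GDP")
    # Roughly +1.5 percent in one case and -3.0 percent in the other, from the same
    # policy: the answer is driven almost entirely by the assumed parameters.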

Policymakers must deal with an array of factors, yet most modelers focus on prices, in part because costs have a clear impact on consumer demand, but also because prices are measurable (and modelers, essentially, are measurers). As a result, the modeling community often ignores the numerous nonprice factors such as environmental quality, national security, unexpected outcomes, and “anomalous” behaviors that influence energy consumption and technological diffusion.

In addition, modelers largely avoid externalities such as the medical costs associated with health problems that result from the pollutants emitted by fossil fuel-fired power plants. These expenses are clearly greater than zero and less than infinite, yet most modelers, wanting to avoid uncertainties, tend to stick with zero. This approach is both unrealistic and distorting.

Most modelers also assume, perhaps inadvertently, that the status quo will continue. They tend to make projections based on historical averages, but the reality is that conditions and averages change, often as a result of new policies or technological innovation. History does not progress in a linear fashion, yet most models assume linear trend lines. Models tend to be useful if one wants to know about an unchanging future, which rarely occurs.

Modelers, moreover, typically underestimate uncertainties. No doubt predicting future social trends and technological change is difficult, if not impossible. Some futurists foresee a dramatically changing world, with mass customization and teleworking being just two of the trends that may transform markets. At the same time, new inventions, such as low-resistance electricity transmission, could revolutionize the generation, delivery, and use of electric power. Such uncertainties suggest that energy-economic models would be more useful if they outlined a broader range of possible developments.
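
What outlining such a range might look like in practice can be sketched in a few lines of code. The growth-rate distribution below is invented purely for illustration; the point is the form of the answer, a spread of outcomes rather than a single number.

    # Illustrative only: turning a point forecast into a range by sampling
    # uncertain annual demand growth. The distribution parameters are invented.
    import random

    random.seed(1)
    base_demand = 100.0   # index of current energy demand
    years = 25
    outcomes = sorted(
        base_demand * (1 + random.gauss(0.012, 0.008)) ** years  # assumed 1.2% +/- 0.8%/yr
        for _ in range(10_000)
    )
    low, mid, high = outcomes[500], outcomes[5_000], outcomes[9_500]
    print(f"Demand index after {years} years: {low:.0f} (5th pct), "
          f"{mid:.0f} (median), {high:.0f} (95th pct)")
    # A spread like this tells a policymaker how robust a proposal must be,
    # which a single point estimate cannot.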

Discontinuities, or rapid changes, also present enormous challenges to forecasters. It is often assumed, for instance, that any changes in Earth’s climate that result from increases in the concentration of greenhouse gases in the atmosphere will follow a linear progression. But some scientists believe that the climate will “snap”–change dramatically–when greenhouse gases reach a certain concentration. Predicting that point, of course, is impossible, even if its possibility is important to consider.

Predictions sometimes even conflict with common sense. One key model used in the United States, for instance, estimates that renewable energy technologies will not grow rapidly even if the price of renewable energy is zero. It is hard for policymakers to understand how free energy would not be popular in the market.

While economists themselves debate such shortcomings, often in journals unread outside the field, policymakers need to provide the direction and resources needed for modelers to tackle, if not totally resolve, the most serious problems. If lawmakers are to obtain the most accurate guidance on energy and environmental issues, then they must engage and challenge the modelers rather than simply be the passive recipients of advocacy campaigns laced with economic charts and tables.

Demanding improvement

The nation’s most prominent energy-economic forecasting tool is maintained by the Energy Information Administration (EIA) within the Department of Energy (DOE). The department uses this National Energy Modeling System (NEMS) each year to develop the Annual Energy Outlook. About one-tenth of the EIA’s annual $82 million budget is devoted to this model and to analyses of deviations between its predictions and reality. Because the EIA tries to be policy-neutral, the agency does not incorporate a law’s impact until the legislation is implemented, and it consequently struggles to provide clear estimates of the impact and effectiveness of alternative policies.

Numerous other federal agencies possess their own energy models, but there is little coordination and sometimes even outright disagreements. The EIA, for instance, estimates that the price of electricity from photovoltaic cells, which convert sunlight into electricity, will remain at a high 16 cents per kilowatt-hour, whereas the National Renewable Energy Laboratory, another division of DOE, predicts that solar prices will fall to a competitive 7.2 cents. At the same time, projections from the Environmental Protection Agency about the potential for energy efficiency tend to be far more optimistic than those of the EIA.

For policymakers to rely on a single model, of course, would be like putting all the federal eggs in one basket. Left to themselves, bureaucrats will oppose any integration, protecting their turf by arguing that their approach is the best. Policymakers, therefore, must demand the coordination of modeling efforts and a detailed analysis of conflicts. The federal government needs an interagency review, one that consistently highlights the assumption differences among various models, identifies their strengths and weaknesses, and pinpoints gaps in coverage.

Policymakers may need to ask more specific questions if modelers are to assess the potential for policy alternatives to achieve particular goals. At the same time, lawmakers must exert themselves as a key audience for the modelers’ work. To meet their needs, policymakers need to demand that energy-economic modelers provide more realistic ranges, cooperate with diverse specialists, account for externalities and nonprice factors, and consider the effects of technological innovation.

Modelers and lobbyists will undoubtedly always use economic forecasts to bolster their particular policy perspectives. In fact, interest groups devote substantial resources to justifying their positions with models, data, and scientific-appearing analysis. They sometimes finance researchers who share their biases and then widely promote the findings of those researchers. Although most models by government agencies and scientists are advanced without preconceived conclusions, even they are influenced by the modelers’ biases and assumptions.

Policymakers, therefore, need to demand that models be less opaque, that their biases and assumptions be made clear. This is certainly possible. As noted more than a decade ago by two respected policy analysts, M. Granger Morgan and Max Henrion, “There are some models, especially some science and engineering models, that are large or complex because they need to be. But many more are large or complex because their authors gave too little thought to why and how they were being built and how they would be used.” Many economists bury their analytical assumptions and inadvertently suggest that models are magical “black boxes” that foretell the future. Yet energy-economic projections simply reflect the modeler’s assumptions, and they are more valuable to policymakers when those hypotheses are made clear. Such clarity also would enable other modelers to replicate and evaluate the reasonableness of the assumptions.

Modelers are an esoteric fraternity. Debates within the energy-economic community can be active, if not heated. Modelers themselves criticize models, highlighting unrealistic assumptions and scrutinizing data sets. Like many technical experts, however, they suffer from disciplinary myopia and are typically reluctant to cooperate with colleagues who have different expertise.

Because forecasting is only as good as a model’s assumptions, policymakers would benefit by demanding the involvement of diverse experts, including marketing gurus, environmental economists, and corporate planning specialists. Marketers are particularly important to engage, since they can help policymakers obtain a realistic sense of the potential of new technologies. Modelers tend to assume that technologies will be adopted only when their price becomes attractive. However, marketers (and any parent with teenage children at a shopping mall) understand that purchases are often made because of attributes other than price. In the energy world, an industrialist might buy a combined heat and power system because it would enhance reliability and security, not caring as much about the initial cost. The insights of marketers would enrich energy models, identifying the array of incentives that can advance technologies in the marketplace.
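
A toy discrete-choice calculation makes the marketers’ point concrete. The logit form is standard, but every weight and attribute score below is invented for illustration; none of it describes an actual market.

    # Illustrative multinomial-logit shares: when reliability is valued, a costlier
    # combined heat and power (CHP) system wins adoption that price alone would deny it.
    # All weights and attribute scores are invented for illustration.
    import math

    def shares(options, weights):
        """Logit market shares given per-option attribute scores and utility weights."""
        utilities = [sum(w * x for w, x in zip(weights, option)) for option in options]
        exps = [math.exp(u) for u in utilities]
        total = sum(exps)
        return [e / total for e in exps]

    # Attributes per option: (negative annualized cost, reliability score)
    grid_only = (-1.0, 0.3)
    chp_system = (-1.4, 0.9)   # costlier, but far more reliable on site

    price_only = shares([grid_only, chp_system], weights=(1.0, 0.0))
    with_reliability = shares([grid_only, chp_system], weights=(1.0, 4.0))
    print(f"CHP share when only price matters:    {price_only[1]:.0%}")
    print(f"CHP share when reliability is valued: {with_reliability[1]:.0%}")
    # Roughly 40 percent versus nearly 90 percent with these invented numbers.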

Policymakers also need to demand a clearer understanding of the economy’s uncertainty and flexibility. They must encourage forecasters to highlight the wide range of possible outcomes for energy and economic variables.

Lawmakers, in short, need to redefine their roles with modelers. The two groups, although working in different worlds with different demands and requirements, need each other. Unfortunately, many modelers ignore policymakers totally, focusing instead on arcane debates within their expert community. For those who do address policy, the typical approach is to use models as a lobbying tool for a particular policy. This view implies that modelers should try to influence the priority decisions made by lawmakers. An alternative approach would be for modelers to be responsive to policymaker requests for insights about which actions would most effectively achieve an identified goal.

It is the job of politicians to set policy goals for the economy and environment. They need help identifying what tools, such as incentives or controls, can best achieve those goals. No doubt some modelers will continue trying to influence the setting of goals, but they need to do a better job of analyzing policy tools and of helping policymakers understand the most effective legislative and regulatory actions.


Richard Munson ([email protected]) is executive director of the Northeast-Midwest Institute in Washington, D.C.

Real Numbers: Small cities, big problems

Cities are home to nearly half of the world’s population, and over the next 30 years most of the 2-billion-person increase in global population is expected to occur in cities and towns in poor countries. In many parts of the world, this represents a radical departure from what occurred during the past 25 years, when the pattern of growth was much more evenly divided between urban and rural areas.

The speed and scale of these changes present many challenges. For many observers the immediate concern is the massive expected increase in the numbers of urban poor. In many countries in the developing world, at least one in four urban residents is already estimated to be living in absolute poverty, and the manifestations of urban poverty are clearly visible in all major cities: overcrowded neighborhoods, high rates of crime, inadequate housing, and insufficient access to clean water, adequate sanitation, and other social services.

In thinking about an urban future, it is perhaps only natural to imagine a world in which everyone is living in mega-cities the size of Sao Paulo, Mexico City, Beijing, or Lagos. But that picture is misleading. In fact, the bulk of urban population growth for the foreseeable future will take place in far smaller cities and towns, a point that receives virtually no media recognition. This is particularly significant because smaller cities–especially those with fewer than 100,000 residents–are generally the most disadvantaged.

Effective governance will be required to manage the urban transformation. If current levels of service delivery are any indication, huge capital investments are likely to be required in small cities and towns in the developing world over the next 30 years.

A comprehensive study of these issues can be found in the National Academies’ recent report Cities Transformed: Demographic Change and Its Implications in the Developing World, which is available at www.nap.edu.

Cities face rapid population growth

According to the latest United Nations projections, virtually all of the world’s population growth over the next 30 years will occur in urban areas.

Urbanization moves to poor countries

Undoubtedly the most profound difference between the first half of the 20th century and today is that urbanization was then predominantly confined to countries that enjoyed the highest levels of per capita income. In the more recent past (and indeed for the foreseeable future), the most visible changes have occurred, and will continue to occur, in middle- and low-income countries.

Look beyond mega-cities

Large cities will play a significant role in absorbing anticipated future growth. But despite popular images to the contrary, mega-cities will not dominate. For the foreseeable future, the majority of urban residents will continue to live in much smaller urban settlements.

A public services gap

There is a large gap between small and large cities in access to basic public services, and there are also large differences in access within individual cities.

Small cities, smaller incomes

There are a number of reasons to believe that economic conditions in small cities are often worse than those in large cities. Systematic evidence on this point is hard to find, but some suggestive evidence is available for Cote d’Ivoire. Between 1985 and 1998, the proportion of residents estimated to be living on less than $2 (U.S.) per day was consistently lower in Abidjan than in the country’s smaller cities. In the early 1990s, macroeconomic deterioration drove up poverty rates in both Abidjan and the secondary cities, but it never erased Abidjan’s advantage.


Barney Cohen ([email protected]) is senior program officer of the National Research Council’s Committee on Population.

Saving Earth’s Rivers

The preservation of ecosystem health must become an explicit goal of water development and management.

The odds do not look good for the future of the planet’s rivers. As populations and economies grow against a finite supply of water, many previously untapped rivers are being targeted for new dams and diversions, and already-developed rivers are coming under increased pressure. A number of major rivers, including the Colorado, the Indus, and the Yellow, are already so overtapped that they dry up before reaching the sea. Meanwhile, India is proposing to link all 37 of its major rivers in a massive water supply scheme, Spain plans to build 120 dams in the Ebro River basin, and China intends to transfer water from the Yangtze River north to the overstressed Yellow River basin. In the United States, a project has been proposed in Colorado in which a pipeline would capture Colorado River water at the state’s western boundary and move it eastward across the Continental Divide to the growing metropolitan areas of the Colorado Front Range.

These proposed projects will almost certainly add to the ledger of ecological damage already wrought on the planet’s rivers. Dams and diversions now alter the timing and volume of river flows on a wide geographic scale. According to Carmen Revenga and colleagues at the World Resources Institute, dams, diversions, or other infrastructure have fragmented 60 percent of the 227 largest rivers. Most of the rivers of Europe, Japan, the United States, and other industrialized regions are now controlled more by humans than by nature. Rather than flowing to the rhythms of the hydrologic cycle, they are turned on and off like elaborate plumbing works.

During recent decades, scientists have amassed considerable evidence that a river’s natural flow regime–its variable pattern of high and low flows throughout the year as well as across many years–exerts great influence on river health. Each aspect of a river’s flow pattern performs valuable work for the system as a whole (see table). For example, flood flows cue fish to spawn and trigger certain insects to begin a new phase of their life cycle; very low flows may be critical to the recruitment of riverside or riparian vegetation. When humans alter these natural patterns to supply growing cities and farms with water, generate electricity, facilitate river-based navigation, and protect expanding settlements from floods, the vitality and productivity of river ecosystems can become seriously degraded.

Societies have reaped substantial economic rewards from these modifications to rivers. However, because inadequate attention has been paid to the ecological side effects of this development, society has lost a great deal as well. In their natural state, healthy rivers perform myriad ecosystem services, such as purifying water, moderating floods and droughts, and maintaining habitat for fisheries, birds, and wildlife. They connect the continental interiors with the coasts, bringing sediment to deltas and coastal beaches, delivering nutrients to fish habitats, and maintaining salinity balances that sustain productive estuaries. From source to sea and from channel to floodplain, river ecosystems gather, store, and move snowmelt and rainwater in synchrony with nature’s cycles. The diversity and abundance of life in running waters reflect millions of years of evolution and adaptation to these natural rhythms.

In little more than a century, human societies have so altered rivers that they are no longer adequately performing many of their evolutionary roles or delivering many of the ecological services on which human economies have come to depend. Just as each river has a unique flow signature, each will have a different response to human disruptions of its flow regime. But in nearly every case the result will be a loss of ecological integrity and a decline in river health. In addition to harming the ecosystems themselves, these transformations also destroy many of the valuable goods and services on which people and economies rely.

The construction of Egypt’s High Dam at Aswan during the 1960s, for example, greatly altered the habitat and diversity of life in the northern extent of the Nile River. Of the 47 commercial fish species in the Nile before the dam’s construction, only 17 were still harvested a decade after the dam’s completion. Similarly, fisheries declined dramatically after completion in 1994 of the Pak Mun Dam on Thailand’s Mun River, a large tributary of the Mekong. Globally, the World Conservation Union estimates that 20 percent of the world’s 10,000 freshwater fish species are at risk of extinction or are already extinct. According to Bruce Stein and colleagues at NatureServe (a biodiversity information organization), 37 percent of freshwater fish species in the United States are to some degree at risk of extinction, as are 69 percent of freshwater mussel species.

For too long, government officials and water planners have allowed water development to proceed until the river flows and the life they support are severely compromised. The historical view of water development that has dominated up to the present time considers freshwater ecosystems to be resources that should be exploited for the growth of the human economy. Because the health of ecosystems themselves and the natural services they provide is not an explicit goal in this mindset, nature’s water needs go unrecognized and unspecified. For a period of time, this approach appears to work: Economies reap the rewards of additional irrigation, hydropower, and other human water uses, while the residual is still sufficient to sustain natural ecosystem functions to a reasonable degree. Over time, however, as human pressures on water systems increase, the share of water devoted to ecosystem functions declines to damaging levels. In much of the world, nature’s residual slice of the water pie becomes insufficient to keep ecosystems functioning and to sustain freshwater life.

It’s time for a shift to a new mindset, one that makes the preservation of ecosystem health an explicit goal of water development and management. It would recognize that the human water economy is a subset of the one provided by nature and that human societies depend on and receive valuable benefits from healthy ecosystems. To preserve these benefits, society needs to make what we call an “ecosystem service allocation”: a designation of the quantity, quality, and timing of flows needed to safeguard the health and functioning of river systems. This allocation implies a limit on the degree to which society can wisely alter natural river flows. Rather than freshwater ecosystems receiving whatever water happens to be left over after human demands are met–an ever-shrinking residual piece of the pie–they receive what they need to remain healthy. Modification of river flows for economic purposes could expand over time, but only up to the sustainability boundary defined by the flows allocated for ecosystem support.

Contrary to initial appearances, this limit on river alterations would not be a barrier to economic advancement but rather a necessary ingredient for sustainable development. Once human water extractions and flow modifications have reached the limit in any river basin or watershed, new water demands would be met not by further river manipulation but by raising water productivity–deriving more benefit out of the water already appropriated for human purposes–and by sharing water more equitably. In this way, establishing an ecosystem service allocation would unleash the potential for conservation, recycling, and efficiency to help society garner maximum value from rivers, including in-stream and extractive benefits.

In the Murray-Darling river basin in Australia, for example, water officials have capped withdrawals in an attempt to arrest the severe decline in the river’s ecological health. This cap on future water extractions provided a much-sought degree of certainty that existing rights to water use would be protected from future impingement and helped ensure that existing rights holders would enjoy their full allotment more of the time. Further, the cap is expected to create a strong incentive to improve water use efficiency and to raise water productivity (the value derived per cubic meter of water extracted). In fact, one study by the Australian Academy of Technological Sciences and Engineering and the Institution of Engineers in Australia projects a doubling of the size of the Murray-Darling basin economy over 25 years with the cap and water reforms in place.

Developing tangible policies

Translating this ecological mindset for river management into tangible policies and management practices will not be easy. The challenge of managing rivers for ecological sustainability will require concerted action on two fronts. First, many more scientists must be enlisted in the task of defining the quantity, quality, and timing of water flows needed to protect river health, so that a sound foundation for decisionmaking is developed. Second, appropriate water policy tools and governance structures must be instituted to manage human demands for water within the scientifically defined sustainability boundaries.

The scientific knowledge and tools for determining river flow conditions necessary to protect ecosystem health have advanced rapidly in recent years. Although such analyses once focused only on protecting minimum flow levels intended to keep rivers from going completely dry, scientists now understand the need to prescribe a full spectrum of flow conditions to sustain ecosystem health, ranging from normal low-flow levels to frequently recurring high-flow pulses and even occasional floods. Once dominated by fish biologists, assessments of river flow needs have become highly interdisciplinary, involving specialists in riparian and estuarine ecology, water quality, hydrology, and fluvial geomorphology, as well as fish biology.

The ecological knowledge and scientific methods used in assessing water management activities will likely continue to mature swiftly as societal demand for river protection or restoration grows, creating opportunities for river scientists to practice their trade in a growing number of places. A number of regulatory mandates or policy decisions are forcing changes in water management activities that will require scientific input. For example, at least 177 hydropower dams in the United States are scheduled for relicensing by the Federal Energy Regulatory Commission by 2010, providing opportunity to negotiate new license conditions that improve ecological conditions in the affected rivers.

The formulation and adoption of scientific recommendations remain problematic in many instances, however, as is made clear by the heated debates about scientific uncertainty in the Klamath River basin in Oregon. In recent years, a number of scientific analyses of the water needed to protect endangered salmon runs and other aquatic species in the Klamath basin have been debated by scientists, conservationists, governmental water agencies, and farming interests.

A number of daunting challenges commonly arise in the process of developing flow recommendations for rivers, including (1) the difficulty of translating ecological knowledge into a clear quantitative flow recommendation that can be implemented by water managers; (2) the tendency for uncertainties and data gaps to paralyze scientific deliberations; (3) a bias toward allocating water to activities with well-defined economic benefits, which causes many ecosystem services to be ignored or discounted in decisionmaking; (4) inadequate time frames or funding available for conducting assessments; (5) the lack of a clear process or timeline for implementing flow recommendations, which can dissuade many scientists from contributing the necessary time and effort to the process; and (6) an aversion on the part of scientists to offering quantitative recommendations if opportunities for improving them in the future are ill-defined.

Fortunately, these obstacles are being surmounted with increasing frequency. Despite highly publicized conflicts such as that over the Klamath, many reform projects are quietly moving forward. A Flow Restoration Database compiled by the Nature Conservancy lists more than 350 rivers globally for which flow restoration efforts are planned, under way, or completed.

One such place is the Savannah River, which forms the border between South Carolina and Georgia. Flow alterations from upstream dams have affected fish populations and severely limited the reproduction of bottomland hardwood trees in the river’s floodplain. More than 40 scientists from 20 state and federal agencies, academic institutions, and conservation organizations have been working collaboratively to develop flow recommendations to restore the river and floodplain ecosystem and estuary. Sponsored by the U.S. Army Corps of Engineers and natural resource agencies in the two states, the scientists in 2003 prepared a set of quantified flow recommendations that will form the basis of an adaptive flow-restoration program. The Corps is currently examining the feasibility of implementing the recommendations while meeting as many other demands for the river’s water as possible, and hopes to begin pilot-testing some of the recommendations as early as spring 2004. The inclusive and collaborative nature of the scientific process being employed on the Savannah River has garnered broad stakeholder support and enabled the Corps to address the water needs of the ecosystems along with other human demands as part of a comprehensive river basin planning process for the river. By identifying key aspects of the flow recommendations that can be implemented without contention from existing water users, flow restoration can begin and scientists can start to document the recovery of the ecosystem.

Flow restoration efforts are planned, underway, or have been completed on more than 350 rivers globally.

Another example from the Green River in Kentucky has demonstrated that significant movement toward ecological sustainability can sometimes be attained in just a few years. The Corps is working with conservationists and scientists to modify its dam operations on the Green River for ecological benefit. The Green River Dam, built in 1969, has been managed for two primary purposes: flood control and reservoir-based recreation. During the summer, the Corps maintained a high lake level behind the dam to maximize recreational benefits. Then, at the end of the summer season, the Corps would rapidly lower the lake level to provide storage capacity for controlling winter floods. As the lake level was being lowered, the rapid release of water from the dam would wreak havoc on the downstream river environment. River creatures adapted to the river’s naturally low and slow water levels in the fall season would get hit with an artificial flood. Discussions of these ecological problems began in 2000, and the Corps has already begun implementing a new operational plan for the dam that continues to support its original operating purposes while returning the river’s flow to a close semblance of its natural variability. Under a new Sustainable Rivers Project with the Nature Conservancy, the Corps’ leadership is now promoting similar flow restoration efforts at many other places in its portfolio of more than 630 dams.

Sweeping changes needed

Although these two examples demonstrate that important progress can be made through cooperative alliances between water managers, conservationists, and scientists, sweeping changes in existing water policies are needed to foster such activity on the thousands of other rivers needing such restoration or protection. Specifically, such policies need to allocate to river ecosystems an adequate supply of water to sustain their long-term health and productivity. We can cite two examples of progressive water policy–one at the state and one at the national level–that set appropriate limits on human alterations of river flows and that foster scientific assessment of sustainability boundaries.

South Africa’s 1998 National Water Act is a landmark in international water policy. It integrates public trust principles, recognition of ecosystem service values, and scientific understanding of ecosystem water needs in a way that could revolutionize that society’s relationship with rivers. Specifically, the law establishes a two-part water allocation system known as the Reserve. The first part is a nonnegotiable allocation to meet the basic water needs of all South Africans for drinking, cooking, sanitation, and other essential purposes. The second part is an allocation of water to ecosystems to sustain their health and functioning in order to conserve biodiversity and to secure the valuable ecosystem services they provide to society. Specifically, the act says, “the quantity, quality, and reliability of water required to maintain the ecological functions on which humans depend shall be reserved so that the human use of water does not individually or cumulatively compromise the long-term sustainability of aquatic and associated ecosystems.”

The water determined to constitute this two-part Reserve has priority over all other uses, and only this water is guaranteed as a right. The use of water for purposes outside the Reserve, including irrigation and industrial uses, has lower priority and is subject to authorization. One year after the law’s enactment, the government issued guidelines describing in detail how the Reserve should be determined. Many of South Africa’s river scientists are now engaged in quantifying the flow allocations that will constitute the ecological component of the Reserve in each major watershed.

In the United States, most states have the ability to grant, deny, and set conditions on permissions to extract water from state water bodies, giving them substantial potential to protect river flows. To be used effectively, however, state permitting programs must be directly keyed to the maintenance of ecological flow regimes, so that the sum of all flow modifications in a river does not exceed the threshold defined for that place and time. The Florida Water Act, passed in 1972, provides for such protection through its mandate to set “minimum flows and levels” to protect ecological health in each river basin in the state. A “percent-of-flow” approach, adopted by one of the state’s five water management districts, illustrates a mechanism for setting and protecting a sustainability boundary. In 1989, the Southwest Florida Water Management District began limiting direct withdrawals from undammed rivers to a percentage of the natural streamflow at the time of withdrawal. For example, cumulative withdrawals from the Peace and Alafia Rivers are limited to 10 percent of the daily flow; during periods of very low flow, withdrawals are prohibited completely. The district is now using percentage withdrawal limits that vary with seasons and flow ranges in order to better protect the ecological health of rivers under its jurisdiction.

Importantly, this mechanism preserves the natural flow regime of rivers: because withdrawals are keyed to a percentage of flow, the great majority of the natural flow is protected every day. If a new permit application would cause total withdrawals to exceed the threshold, denial of the permit is recommended unless the applicant can demonstrate that the additional withdrawals will not cause adverse ecological effects. This provision allows for flexibility but places the burden of proof on potential water users to show that their withdrawals would not harm the ecosystem.
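To make the mechanics concrete, here is a minimal sketch of how a percent-of-flow rule of this kind might be applied when screening a new withdrawal request. The 10 percent cap reflects the district rule described above; the low-flow cutoff value, the function, and its parameters are hypothetical illustrations rather than the district’s actual procedure.

```python
# Minimal sketch of a percent-of-flow withdrawal rule. The 10 percent cap is
# taken from the text; the low-flow cutoff and all other details are assumed
# for illustration only.

def allowable_new_withdrawal(daily_flow_cfs, existing_withdrawals_cfs,
                             cap_fraction=0.10, low_flow_cutoff_cfs=100.0):
    """Return the additional withdrawal (in cfs) that could be permitted today."""
    if daily_flow_cfs <= low_flow_cutoff_cfs:
        return 0.0  # very low flow: withdrawals prohibited entirely
    cap = cap_fraction * daily_flow_cfs  # cumulative withdrawals capped at a share of today's flow
    return max(0.0, cap - existing_withdrawals_cfs)

# Example: with 1,500 cfs in the river and 120 cfs already permitted,
# only 30 cfs of additional withdrawal could be authorized.
print(allowable_new_withdrawal(1500.0, 120.0))
```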

Can we save Earth’s rivers? These examples of applied river science and progressive policy demonstrate that it is possible. But it will still require many countries to make a dramatic departure from the destructive path they are on.


Brian Richter ([email protected]) is director of the Freshwater Initiative of the Nature Conservancy in Charlottesville, Virginia. Sandra Postel is director of the Global Water Policy Project in Amherst, Massachusetts. They are the authors of Rivers for Life: Managing Water for People and Nature (Island Press, 2003).

The Hype about Hydrogen

We can’t use hydrogen’s long-term potential as an excuse to avoid taking action now on reducing greenhouse gas emissions.

Hydrogen and fuel cell cars are being hyped today as few technologies have ever been. In his January 2003 State of the Union address, President Bush announced a $1.2 billion research initiative, “so that the first car driven by a child born today could be powered by hydrogen, and pollution-free.” The April 2003 issue of Wired magazine proclaimed, “How Hydrogen Can Save America.” In August 2003, General Motors said that the promise of hydrogen cars justified delaying fuel-efficiency regulations.

Yet for all the hype, a number of recent studies raise serious doubts about the prospects for hydrogen cars. In February 2004, a study by the National Academies’ National Academy of Engineering and National Research Council concluded, “In the best-case scenario, the transition to a hydrogen economy would take many decades, and any reductions in oil imports and carbon dioxide (CO2) emissions are likely to be minor during the next 25 years.” Realistically, a major effort to introduce hydrogen cars before 2030 would actually undermine efforts to reduce emissions of heat-trapping greenhouse gases such as CO2.

As someone who helped oversee the Department of Energy’s (DOE’s) program for clean energy, including hydrogen, for much of the 1990s–during which time hydrogen funding was increased by a factor of 10–I believe that continued research into hydrogen remains important because of its potential to provide a pollution-free substitute for oil in the second half of this century. But if we fail to limit greenhouse gas emissions over the next decade, and especially if we fail to do so because we have bought into the hype about hydrogen’s near-term prospects, we will be making an unforgivable national blunder that could lock the United States into global warming of 1 degree Fahrenheit per decade by midcentury.

Hydrogen is not a readily accessible energy source like coal or wind. It is bound up tightly in molecules such as water and natural gas, so it is expensive and energy-intensive to extract and purify. A hydrogen economy–a time in which the economy’s primary energy carrier would be hydrogen made from sources of energy that have no net emissions of greenhouse gases–rests on two pillars: a pollution-free source for the hydrogen itself and a fuel cell for efficiently converting it into useful energy without generating pollution.

Fuel cells are small, modular electrochemical devices, similar to batteries, but which can be continuously fueled. For most purposes, you can think of a fuel cell as a “black box” that takes in hydrogen and oxygen and puts out only water plus electricity and heat. The most promising fuel cell for transportation uses is the proton exchange membrane (PEM), first developed in the early 1960s by General Electric for the Gemini space program. The price goal for transportation fuel cells is to come close to that of an internal combustion engine, roughly $30 per kilowatt. Current PEM costs are about 100 times greater. It has taken wind and solar power each about 20 years of major government and private-sector investments in R&D to see a 10-fold decline in prices, and they still each comprise well under 1 percent of U.S. electricity generation. A major technology breakthrough is needed in transportation fuel cells before they will be practical.
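To put that cost gap in perspective, a rough back-of-envelope calculation follows; the 75-kilowatt stack size is an assumed illustrative figure for a mid-size car, not a number from the studies cited here.

```python
# Rough scale of the PEM cost gap described above. The 75 kW stack size is an
# assumed illustrative value; the $30/kW target and the ~100x multiple come
# from the text.

TARGET_DOLLARS_PER_KW = 30.0
CURRENT_COST_MULTIPLE = 100.0
ASSUMED_STACK_KW = 75.0

target_cost = ASSUMED_STACK_KW * TARGET_DOLLARS_PER_KW    # ~$2,250 per vehicle at the goal
current_cost = target_cost * CURRENT_COST_MULTIPLE        # ~$225,000 per vehicle at today's costs
print(round(target_cost), round(current_cost))
```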

Running a fuel cell car on pure hydrogen, the option now being pursued by most automakers and fuel cell companies, means the car must be able to safely, compactly, and cost-effectively store hydrogen onboard. This is a major technical challenge. At room temperature and pressure, hydrogen takes up some 3,000 times more space than gasoline containing an equivalent amount of energy. The DOE’s 2003 Fuel Cell Report to Congress notes that, “Hydrogen storage systems need to enable a vehicle to travel 300 to 400 miles and fit in an envelope that does not compromise either passenger space or storage space. Current energy storage technologies are insufficient to gain market acceptance because they do not meet these criteria.”

The most mature storage options are liquefied hydrogen and compressed hydrogen gas. Liquid hydrogen is widely used today for storing and transporting hydrogen. Indeed, for storage and fueling, liquids enjoy considerable advantages over gases: They have high energy density, are easier to transport, and are typically easier to handle. Hydrogen, however, is not typical. It becomes a liquid only at -423 degrees Fahrenheit, roughly 37 Fahrenheit degrees above absolute zero. It can be stored only in a superinsulated cryogenic tank.

Liquid hydrogen is exceedingly unlikely to be a major part of a hydrogen economy because of the cost and logistical problems in handling it and because liquefaction is so energy-intensive. Some 40 percent of the energy of the hydrogen is required to liquefy it for storage. Liquefying one kilogram (kg) of hydrogen using electricity from the U.S. grid would by itself release some 18 to 21 pounds of CO2 into the atmosphere, roughly equal to the CO2 emitted by burning one gallon of gasoline.
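The liquefaction figures above can be checked with a simple back-of-envelope calculation. The 33.3 kilowatt-hours per kilogram (hydrogen’s lower heating value) and the grid emissions intensity of roughly 1.4 pounds of CO2 per kilowatt-hour are assumed reference values, not figures from this article.

```python
# Back-of-envelope check on the liquefaction energy and CO2 figures above.
# The heating value and grid intensity are assumed reference values.

H2_LHV_KWH_PER_KG = 33.3        # assumed usable energy in 1 kg of hydrogen
LIQUEFACTION_FRACTION = 0.40    # ~40% of that energy consumed by liquefaction (from the text)
GRID_LB_CO2_PER_KWH = 1.4       # assumed average U.S. grid emissions intensity

electricity_kwh = LIQUEFACTION_FRACTION * H2_LHV_KWH_PER_KG  # ~13 kWh per kg liquefied
co2_lb = electricity_kwh * GRID_LB_CO2_PER_KWH               # ~19 lb CO2, within the 18-21 lb range
print(round(electricity_kwh, 1), round(co2_lb, 1))
```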

Nearly all prototype hydrogen vehicles today use compressed hydrogen storage. Hydrogen is compressed up to pressures of 5,000 pounds per square inch (psi) or even 10,000 psi in a multistage process that requires energy input equal to 10 to 15 percent of the hydrogen’s usable energy content. For comparison, atmospheric pressure is about 15 psi. Working at such high pressures creates overall system complexity and requires materials and components that are sophisticated and costly. And even a 10,000-psi tank would take up seven to eight times the volume of an equivalent-energy gasoline tank or perhaps four times the volume for a comparable range (because the fuel cell vehicle will be more fuel efficient than current cars).
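The volume comparison can be checked in the same rough way. The density of hydrogen gas at 10,000 psi (about 40 grams per liter) and the liters-per-gallon conversion are assumed reference values rather than figures from the article.

```python
# Rough check on the compressed-storage volume comparison above.

H2_DENSITY_G_PER_L_AT_10000_PSI = 40.0   # assumed density of hydrogen gas at 10,000 psi
LITERS_PER_GALLON = 3.785

gas_volume_l = 1000.0 / H2_DENSITY_G_PER_L_AT_10000_PSI   # ~25 L of gas to hold 1 kg of hydrogen
volume_ratio = gas_volume_l / LITERS_PER_GALLON           # vs. ~1 gallon of gasoline with equal energy
print(round(gas_volume_l), round(volume_ratio, 1))        # ~6.6x before tank walls and fittings
```

Once tank walls and fittings are added, the ratio approaches the seven to eight times cited above.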

The National Academies’ study concluded that both liquid and compressed storage have “little promise of long-term practicality for light-duty vehicles” and recommended that DOE halt research in both areas. Practical hydrogen storage requires a major technology breakthrough, most likely in solid-state hydrogen storage.

Hydrogen has some safety advantages over liquid fuels such as gasoline. When a gasoline tank leaks or bursts, the gasoline can pool, creating a risk that any spark would start a fire, or it can splatter, posing a great risk of spreading an existing fire. Hydrogen, however, will escape quickly into the atmosphere as a very diffuse gas. Also, hydrogen gas is nontoxic.

Yet hydrogen has its own major safety issues. It is highly flammable, with an ignition energy only one-twentieth that of natural gas or gasoline. It can be ignited by cell phones or by electrical storms located miles away. Hence, leaks pose a significant fire hazard, particularly because they are hard to detect. Hydrogen is odorless, and the addition of common sulfur-based odorants is impractical, in part because they poison fuel cells. Hydrogen burns nearly invisibly, and people have unwittingly stepped into hydrogen flames. Hydrogen can cause many metals, including the carbon steel widely used in gas pipelines, to become brittle. In addition, any high-pressure storage tank presents a risk of rupture. For these reasons, hydrogen is subject to strict and cumbersome codes and standards, especially when used in an enclosed space where a leak might create a growing gas bubble.

Some 22 percent or more of hydrogen accidents are caused by undetected hydrogen leaks. These leaks occur “despite the special training, standard operating procedures, protective clothing, electronic flame gas detectors provided to the limited number of hydrogen workers,” points out Russell Moy, former group leader for energy storage programs at Ford, in the November 2003 Energy Law Journal. Moy concludes that “with this track record, it is difficult to imagine how hydrogen risks can be managed acceptably by the general public when wide-scale deployment of the safety precautions would be costly and public compliance impossible to ensure.” Thus, major innovations in safety will be required before a hydrogen economy is practical.

An expensive fuel

A key problem with the hydrogen economy is that pollution-free sources of hydrogen are unlikely to be practical and affordable for decades. Indeed, even the pollution-generating means of making hydrogen are currently too expensive and too inefficient to substitute for oil.

Bridging the gap between current hydrogen technologies and the marketplace will require revolutionary conceptual breakthroughs.

Natural gas (methane, or CH4) is the source of 95 percent of U.S. hydrogen. The overall energy efficiency of the steam CH4 reforming process (the ratio of the energy in the hydrogen output to the energy in the natural gas fuel input) is about 70 percent. According to a 2002 analysis for the National Renewable Energy Laboratory by Dale Simbeck and Elaine Chang, the cost of producing and delivering hydrogen from natural gas, or producing hydrogen onsite at a local filling station, is $4 to $5 per kg (excluding fuel taxes), comparable to a gasoline price of $4 to $5 a gallon. (A kg of hydrogen contains about the same usable energy as a gallon of gasoline.) This is more than three times the current untaxed price of gasoline. Considerable R&D is being focused on efforts to reduce the cost of producing hydrogen from natural gas, but fueling a significant fraction of U.S. cars with hydrogen made from natural gas makes little sense, either economically or environmentally, as discussed below.
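The energy equivalence noted in the parentheses above, and the natural gas requirement implied by the 70 percent reforming efficiency, can be checked with textbook heating values; the 120 MJ/kg and 32 MJ/L figures are assumed reference values, not numbers from the article.

```python
# Check of the kg-of-hydrogen vs. gallon-of-gasoline equivalence, using
# assumed lower heating values.

H2_LHV_MJ_PER_KG = 120.0        # assumed
GASOLINE_LHV_MJ_PER_L = 32.0    # assumed
LITERS_PER_GALLON = 3.785
REFORMING_EFFICIENCY = 0.70     # from the text

gallon_mj = GASOLINE_LHV_MJ_PER_L * LITERS_PER_GALLON    # ~121 MJ per gallon of gasoline
gas_needed_mj = H2_LHV_MJ_PER_KG / REFORMING_EFFICIENCY  # ~170 MJ of natural gas per kg of hydrogen
print(round(H2_LHV_MJ_PER_KG), round(gallon_mj), round(gas_needed_mj))
```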

Water can be electrolyzed into hydrogen and oxygen by a process that is extremely energy-intensive. Typical commercial electrolysis units require about 50 kilowatt-hours per kg, an energy efficiency of 70 percent. The cost today of producing and delivering hydrogen from a central electrolysis plant is estimated at $7 to $9 per kg. The cost of onsite production at a local filling station is estimated at $12 per kg. Replacing one-half of U.S. ground transportation fuels in 2025 (mostly gasoline) with hydrogen from electrolysis would require about as much electricity as is sold in the United States today.

From the perspective of global warming, electrolysis makes little sense for the foreseeable future. Burning a gallon of gasoline releases about 20 pounds of CO2. Producing 1 kg of hydrogen by electrolysis would generate, on average, 70 pounds of CO2. Hydrogen could be generated from renewable electricity, but that would be even more expensive and, as discussed below, renewable electricity has better uses for the next few decades.
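A quick calculation shows where the 70-pound figure comes from; the grid emissions intensity of about 1.4 pounds of CO2 per kilowatt-hour is an assumed reference value.

```python
# Back-of-envelope check on the electrolysis CO2 figure above.

ELECTROLYSIS_KWH_PER_KG = 50.0      # from the text
GRID_LB_CO2_PER_KWH = 1.4           # assumed average U.S. grid emissions intensity
GASOLINE_LB_CO2_PER_GALLON = 20.0   # from the text

co2_per_kg_h2 = ELECTROLYSIS_KWH_PER_KG * GRID_LB_CO2_PER_KWH    # ~70 lb CO2 per kg of hydrogen
ratio_vs_gasoline = co2_per_kg_h2 / GASOLINE_LB_CO2_PER_GALLON   # ~3.5x a gallon of gasoline
print(round(co2_per_kg_h2), round(ratio_vs_gasoline, 1))
```

Because a kilogram of hydrogen carries roughly the energy of a gallon of gasoline, grid-powered electrolysis is several times more carbon-intensive per unit of fuel energy, although the fuel cell vehicle’s higher efficiency would recover part of that difference.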

Other greenhouse gas-free means of producing hydrogen are being pursued. DOE’s FutureGen project is aimed at designing and building a 270-megawatt prototype coal plant that would cogenerate electricity and hydrogen while removing 90 percent of the CO2. The goal is to validate the viability of the system by 2020. If a permanent storage location, such as an underground reservoir, can be found for the CO2, this would mean that coal could be a virtually carbon-free source of hydrogen. DOE is also pursuing thermochemical hydrogen production systems using nuclear power with the goal of demonstrating commercial-scale production by 2015. Biomass (plant matter) can be gasified and converted into hydrogen in a process similar to coal gasification. The cost of delivered hydrogen from gasification of biomass has been estimated at $5 to $6.30 per kg. It is unlikely that any of these approaches could provide large-scale sources of hydrogen at competitive prices until after 2030.

Stranded investment is one of the greatest risks faced by near-term hydrogen production technologies. For instance, if during the next two decades we built a hydrogen infrastructure around small CH4 reformers in local fueling stations and then decided that U.S. greenhouse gas emissions must be dramatically reduced, we would have to replace that infrastructure almost entirely. John Heywood, director of the Sloan Automotive Lab at the Massachusetts Institute of Technology, argues, “If the hydrogen does not come from renewable sources, then it is simply not worth doing, environmentally or economically.” A major technology breakthrough will be needed to deliver low-cost zero-carbon hydrogen.

The chicken-and-egg problem

Another key issue is the chicken-and-egg problem. At the National Hydrogen Association annual conference in March 2003, Bernard Bulkin, British Petroleum’s chief scientist, said that, “if hydrogen is going to make it in the mass market as a transport fuel, it has to be available in 30 to 50 percent of the retail network from the day the first mass-manufactured cars hit the showrooms.” Yet a 2002 analysis by Argonne National Laboratory found that even with improved technology, “the hydrogen delivery infrastructure to serve 40 percent of the light duty fleet is likely to cost over $500 billion.” Major breakthroughs in hydrogen production and delivery will be required to reduce that figure significantly.

Who will spend the hundreds of billions of dollars on a wholly new nationwide infrastructure to provide ready access to hydrogen for consumers with fuel cell vehicles until millions of hydrogen vehicles are on the road? And who will manufacture and market such vehicles until the infrastructure is in place to fuel those vehicles? Will car companies and fuel providers be willing to take this chance before knowing whether the public will embrace these cars? I fervently hope to see an economically, environmentally, and politically plausible scenario for how this classic chasm can be bridged; it does not yet exist.

Centralized production of hydrogen is the ultimate goal. A pure hydrogen economy requires that hydrogen be generated from CO2-free sources, which would almost certainly require centralized hydrogen production at or near giant wind farms or at coal or biomass gasification power plants in which CO2 is extracted for permanent underground storage. That will require some way of delivering massive quantities of hydrogen to tens of thousands of local fueling stations.

Tanker trucks carrying liquefied hydrogen are commonly used to deliver hydrogen today, but make little sense in a hydrogen economy because of liquefaction’s high energy cost. Also, few automakers are pursuing onboard storage with liquid hydrogen. So after delivery, the fueling station would still have to use an energy-intensive pressurization system. This might mean that storage and transport alone would require some 50 percent of the energy in the hydrogen delivered, negating any potential energy and environmental benefits from hydrogen.

Pipelines are also used for delivering hydrogen today. Interstate pipelines are estimated to cost $1 million per mile or more. Yet we have very little idea today what hydrogen generation processes will win in the marketplace during the next few decades, or whether hydrogen will be able to successfully compete with future high-efficiency vehicles, perhaps running on other pollution-free fuels. This uncertainty makes it unlikely anyone would commit to spending tens of billions of dollars on hydrogen pipelines before there are very high hydrogen flow rates transported by other means and before the winners and losers at both the production end and the vehicle end of the marketplace have been determined. In short, pipelines are unlikely to be the main hydrogen transport means until the post-2030 period.

Trailers carrying compressed hydrogen canisters are a flexible means of delivery but are relatively expensive because hydrogen has such a low energy density. Even with technology advances, a 40-metric-ton truck might deliver only about 400 kg of hydrogen into onsite high-pressure storage. A 2003 study by ABB researchers found that for a delivery distance of 300 miles, the delivery energy approaches 40 percent of the usable energy in the hydrogen delivered. Without dramatic improvement in high-pressure storage systems, this approach seems impractical for large-scale hydrogen delivery.

Producing hydrogen onsite at local fueling stations is the strategy advocated by those who want to deploy hydrogen vehicles in the next two decades. Onsite electrolysis is impractical for large-scale use because it would be highly expensive and inefficient while generating large amounts of greenhouse gases and other pollutants. The hydrogen would need to be generated from small CH4 reformers. Although onsite CH4 reforming seems viable for limited demonstration and pilot projects, it is impractical and unwise for large-scale application, for a number of reasons.

First, the upfront cost is very high: more than $600 billion just to provide hydrogen fuel for 40 percent of the cars on the road, according to Argonne. A reasonable cost estimate for the initial hydrogen infrastructure, derived from Royal Dutch/Shell figures, is $5,000 per car.

Second, the cost of the delivered hydrogen itself in this option is also higher than for centralized production. Not only are the small reformers and compressors typically more expensive and less efficient than larger units, but they also will likely pay a much higher price for the electricity and gas to run them. A 2002 analysis put the cost at $4.40 per kg (equal to $4.40 per gallon of gasoline).

We should not pursue a strategy to reduce greenhouse gas emissions in transportation that would undermine efforts to reduce emissions in electric generation.

Third, “the risk of stranded investment is significant, since much of an initial compressed hydrogen station infrastructure could not be converted later if either a noncompression hydrogen storage method or liquid fuels such as a gasoline-ethanol combination proved superior” for fuel cell vehicles. This was the conclusion of a 2001 study for the California Fuel-Cell Partnership, a Sacramento-based public-private partnership to help commercialize fuel cells. Most of a CH4-based investment would also likely be stranded once the ultimate transition to a pure hydrogen economy was made, because that would almost certainly rely on centralized production and not make use of small CH4 reformers. Moreover, it’s possible that the entire investment would be stranded in the scenario in which hydrogen cars simply never achieve the combination of popularity, cost, and performance to triumph in the marketplace.

In the California analysis, it takes 10 years for investment in infrastructure to achieve a positive cash flow, and to achieve this result requires a variety of technology advances in components and manufacturing. Also, even a small tax on hydrogen (to make up the revenue lost from gasoline taxes) appears to delay positive cash flow indefinitely. The high-risk and long-payback nature of this investment would seem far too great for most investors, especially given the history of alternative fuel vehicles.

The United States has a great deal of relevant experience in the area of alternative fuel vehicles that is often ignored in discussions about hydrogen. The 1992 Energy Policy Act established the goal of having alternative fuels replace at least 10 percent of petroleum fuels in 2000 and at least 30 percent in 2010. By 1999, some one million alternative fuel vehicles were on the road, only about 0.4 percent of all vehicles. A 2000 General Accounting Office report explained the reasons for the lack of success, concluding that, ” Fundamental economic impediments–such as the relatively low price of gasoline, the lack of refueling stations for alternative fuels, and the additional cost to purchase these vehicles–explain much of why both mandated fleets and the general public are disinclined to acquire alternative fuel vehicles and use alternative fuels.” It seems likely that all three of these problems will hinder hydrogen cars. Compared to other alternative fuels, such as ethanol and natural gas, the best analysis today suggests that hydrogen will have a much higher price for the fuel, the fueling stations, and the vehicles.

The fourth reason that producing hydrogen on-site from natural gas at local fueling stations is impractical is that natural gas is simply the wrong fuel on which to build a hydrogen-based transportation system. The United States consumes nearly 23 trillion cubic feet (tcf) of natural gas today and is projected to consume more than 30 tcf in 2025. Replacing 40 percent of ground transportation fuels with hydrogen in 2025 would probably require an additional 10 tcf of gas, plus 300 billion kilowatt-hours of electricity, or 10 percent of current power usage. Politically, given the firestorm over recent natural gas supply constraints and price spikes, it seems very unlikely that the U.S. government and industry would commit to natural gas as a substitute for even a modest fraction of U.S. transportation energy.

In addition, much if not most incremental U.S. natural gas consumption for transportation would likely come from imported liquefied natural gas (LNG). LNG is dangerous to handle, and LNG infrastructure is widely viewed as a likely terrorist target. Yet one of the major arguments in favor of alternative fuels has been their ability to address concerns over security and import dependence.

Finally, natural gas has too much economic and environmental value to the electric utility, industrial, and building sectors to justify diverting significant quantities to the transportation sector, thereby increasing the price for all users. In fact, using natural gas to generate significant quantities of hydrogen for transportation would, for the foreseeable future, undermine efforts to combat global warming.

Thus, beyond limited pilot stations, it would be unwise to build thousands of local refueling stations based on steam CH4 reforming or, for that matter, based on any technology not easily adaptable to delivery of greenhouse gas-free hydrogen.

The global warming century

Perhaps the ultimate reason why hydrogen cars are a post-2030 technology is the growing threat of global warming. Our energy choices are now inextricably tied to the fate of our global climate. The burning of fossil fuels–oil, gas and coal–emits CO2 into the atmosphere, where it builds up, blankets the earth, and traps heat, accelerating global warming. We now have greater concentrations of CO2 in the atmosphere than at any time in the past 420,000 years and probably at any time in the past 3 million years.

Carbon-emitting products and facilities have a long lifetime. Cars last 13 to 15 years or more; coal plants can last 50 years. Also, CO2 lingers in the atmosphere, trapping heat for more than a century. These two facts together create an urgency to avoid constructing another massive and long-lived generation of energy infrastructure that will cause us to miss the window of opportunity for carbon-free energy until the next century.

Between 2000 and 2030, the International Energy Agency projects that coal generation will double. The projected new plants would commit the planet to total CO2 emissions of some 500 billion metric tons over their lifetime, which is roughly half the total emissions from all fossil fuel consumed worldwide during the past 250 years. Building these coal plants would dramatically increase the chances of catastrophic climate change. What we need to build is carbon-free power. A March 2003 analysis in Science by Ken Caldeira and colleagues concluded that if our climate’s sensitivity to greenhouse gas emissions is in the midrange of current estimates, “stabilization at 4°C warming would require installation of 410 megawatts of carbon emissions-free energy capacity each day” for 50 years. Yet current projections for the next 30 years are for building just 80 megawatts per day. Because planetary warming accelerates over time and because temperatures over the continental United States are projected to rise faster than the average temperature of the planet, a warming of 4° C means that by mid-century, the U.S. temperature could well be rising as much per decade as it rose during the entire past century: one degree Fahrenheit.
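The scale of that gap is worth spelling out; the following arithmetic simply extends the figures quoted above.

```python
# Scale check on the carbon-free capacity figures cited above.

REQUIRED_MW_PER_DAY = 410    # Caldeira et al. estimate quoted in the text
PROJECTED_MW_PER_DAY = 80    # projected build rate quoted in the text

total_required_gw = REQUIRED_MW_PER_DAY * 365 * 50 / 1000.0   # ~7,500 GW of new capacity over 50 years
pace_shortfall = REQUIRED_MW_PER_DAY / PROJECTED_MW_PER_DAY   # required pace is ~5x the projected pace
print(round(total_required_gw), round(pace_shortfall, 1))
```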

Unfortunately, the path set by the current energy policy of the United States and countries in the developing world will dramatically increase emissions during the next few decades, which will force sharper and more painful reductions in the future when we finally do act. Global CO2 emissions are projected to rise more than 50 percent by 2030. From 2001 to 2025, the U.S. Energy Information Administration projects a 40 percent increase in U.S. coal consumption for electricity generation. And the U.S. transportation sector is projected to generate nearly half of the 40 percent rise in U.S. CO2 emissions forecast for 2025, which again is long before hydrogen-powered cars could have a positive impact on greenhouse gas emissions.

Two points are clear. First, we cannot wait for hydrogen cars to address global warming. Second, we should not pursue a strategy to reduce greenhouse gas emissions in the transportation sector that would undermine efforts to reduce greenhouse gas emissions in the electric generation sector. Yet that is precisely what a hydrogen car strategy would do for the next few decades. For near-term deployment, hydrogen would almost certainly be produced from fossil fuels. Yet running a fuel cell car on such hydrogen in 2020 would offer no significant life cycle greenhouse gas advantage over the 2004 Prius running on gasoline.

Further, fuel cell vehicles are likely to be much more expensive than other vehicles, and their fuel is likely to be more expensive (and the infrastructure will probably cost hundreds of billions of dollars). Although hybrids and clean diesels may cost more than current vehicles, at least when first introduced, their greater efficiency means that, unlike fuel cell vehicles, they will pay for most if not all of that extra upfront cost over the lifetime of the vehicle. A June 2003 analysis in Science by David Keith and Alex Farrell put the cost of CO2 avoided by fuel cells running on zero-carbon hydrogen at more than $250 per ton even with a very optimistic fuel cell cost. An advanced internal combustion engine could reduce CO2 for far less and possibly for a net savings because of the reduced fuel bill.

It would be bad policy for DOE to continue shifting money away from efficiency and renewable energy research toward hydrogen.

Probably the biggest analytical mistake made in most hydrogen studies, including the recent National Academies’ report, is failing to consider whether the fuels that might be used to make hydrogen, such as natural gas or renewable sources, could be better used simply to make electricity. For example, the life cycle or “well-to-wheels” efficiency of a hydrogen car running on gas-derived hydrogen is likely to be under 30 percent for the next two decades. The efficiency of gas-fired power plants is already 55 percent (and likely to be 60 percent or higher in 2020). Cogeneration of electricity and heat using natural gas is more than 80 percent efficient. And by displacing coal, the natural gas would be displacing a fuel that has much higher carbon emissions per unit of energy than gasoline. For these reasons, natural gas is far more cost-effectively used to reduce CO2 emissions in electric generation than it is in transportation.

The same is true for renewable energy. A megawatt-hour of electricity from a renewable source such as wind power, if used to manufacture hydrogen for use in a future fuel cell vehicle, would save slightly less than 500 pounds of CO2 as compared to the best current hybrids. That is less than the savings from using the same amount of renewable electricity to displace a future natural gas plant (800 pounds) and far less than the savings from displacing coal power (2,200 pounds).
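That comparison can be roughly reconstructed as follows. The vehicle efficiencies assumed here (60 miles per kilogram for a future fuel cell car and 50 miles per gallon for a hybrid) are illustrative values chosen to show how a figure of a bit under 500 pounds arises; they are not taken from the article.

```python
# Rough reconstruction of the CO2-savings comparison above. Vehicle
# efficiencies are assumed illustrative values; the other inputs come from
# the text.

ELECTROLYSIS_KWH_PER_KG = 50.0      # from the text
FCV_MILES_PER_KG = 60.0             # assumed fuel cell vehicle efficiency
HYBRID_MPG = 50.0                   # assumed hybrid efficiency
GASOLINE_LB_CO2_PER_GALLON = 20.0   # from the text

kg_h2 = 1000.0 / ELECTROLYSIS_KWH_PER_KG       # 1 MWh of wind power yields ~20 kg of hydrogen
miles = kg_h2 * FCV_MILES_PER_KG               # ~1,200 miles of fuel cell driving
gallons_displaced = miles / HYBRID_MPG         # ~24 gallons a hybrid would have burned
co2_saved_lb = gallons_displaced * GASOLINE_LB_CO2_PER_GALLON
print(round(co2_saved_lb))   # ~480 lb, versus ~800 lb (displacing gas) and ~2,200 lb (displacing coal)
```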

As the June 2003 Science analysis concluded: “Until CO2 emissions from electricity generation are virtually eliminated, it will be far more cost effective to use new CO2-neutral electricity (such as wind) to reduce emissions by substituting for fossil-electric generation than to use the new electricity to make hydrogen.” Barring a drastic change in U.S. energy policy, our electric grid will not be close to CO2-free until well past 2030.

Major breakthroughs needed

Hydrogen and fuel cell vehicles should be viewed as post-2030 technologies. In September 2003, a DOE panel on Basic Research Needs for the Hydrogen Economy concluded that the gaps between current hydrogen technologies and what is required by the marketplace “cannot be bridged by incremental advances of the present state of the art” but instead require “revolutionary conceptual breakthroughs.” In sum, “the only hope of narrowing the gap significantly is a comprehensive, long-range program of innovative, high risk/high payoff basic research.” The National Academies’ study came to a similar conclusion.

DOE should focus its hydrogen R&D budget on exploratory breakthrough research. Given that there are few potential zero-carbon replacements for oil, DOE is not spending too much on hydrogen R&D. But given our urgent need for reducing greenhouse gas emissions with clean energy, DOE is spending far too little on energy efficiency and renewable energy. Unless DOE’s overall clean energy budget is increased, however, it would be bad policy to continue shifting money away from efficiency and renewable energy toward hydrogen. Any incremental money given to DOE should probably be focused on deploying the cost-effective technologies we have today in order to buy us more time for some of the breakthrough research to succeed.

The National Academies’ panel wrote that “it seems likely that, in the next 10 to 30 years, hydrogen produced in distributed rather than centralized facilities will dominate,” and so it recommended increased funding for improving small-scale natural gas reformers and water electrolysis systems. Yet any significant shift toward cars running on distributed hydrogen from natural gas or grid electrolysis would undermine efforts to fight global warming. DOE should not devote any R&D to these technologies. In hydrogen production, DOE should be focused solely on finding a low-cost zero-carbon source, which will almost certainly be centralized. That probably means we won’t begin the hydrogen transition until after 2030 because of the logistical and cost problems associated with a massive hydrogen delivery infrastructure.

But we shouldn’t be rushing to deploy hydrogen cars in the next two decades anyway, because not only are several R&D breakthroughs required but we also need a revolution in clean energy that dramatically accelerates the penetration rates of new CO2-neutral electricity. Hydrogen cars might find limited value as city cars in very polluted cities before 2030, but they are unlikely to achieve mass-market commercialization by then. That is why neither government policy nor business investment should be based on the belief that hydrogen cars will have meaningful commercial success in the near or medium term.

The priority for today is to deploy existing clean energy technologies and to avoid any expansion of the inefficient carbon-emitting infrastructure. If we fail to act now to reduce greenhouse gas emissions–especially if we fail to act because we have bought into the hype about hydrogen’s near-term prospects–future generations will condemn us because we did not act when we had the facts to guide us, and they will most likely be living in a world with a much hotter and harsher climate than ours, one that has undergone an irreversible change for the worse.

Recommended reading

American Physical Society, “The Hydrogen Initiative,” March 2004 (www.aps.org).

Center for Energy and Climate Solutions (www.coolcompanies.org).

National Academy of Engineering and National Research Council, The Hydrogen Economy: Opportunities, Costs, Barriers, and R&D Needs, February 2004 (www.nap.edu/books/0309091632/html).

National Renewable Energy Laboratory, “Hydrogen Supply: Cost Estimate for Hydrogen Pathways,” July 2002 (www.nrel.gov/docs/fy03osti/32525.pdf).

U.S. DOE, “Basic Research Needs for the Hydrogen Economy,” 2003 (www.sc.doe.gov/bes/hydrogen.pdf).

U.S. DOE, Hydrogen, Fuel Cells and Infrastructure Technologies Program (www.eere.energy.gov/hydrogenandfuelcells/).


Joseph J. Romm ([email protected]) is executive director of the Center for Energy and Climate Solutions in Arlington, Virginia, and a principal with the Capital E Group in Washington, D.C. During the Clinton administration, he served as Acting Assistant Secretary of Energy for Energy Efficiency and Renewable Energy. He is author of The Hype about Hydrogen: Fact and Fiction in the Race to Save the Climate (Island Press, 2004).

A 21st-Century Role for Nuclear Weapons

New security challenges and improved conventional weapons mean new roles and requirements for nuclear weapons.

The proliferation of weapons of mass destruction (WMD) has become a metaphor for 21st-century security concerns. Although nuclear weapons have not been used since the end of World War II, their influence on international security affairs is pervasive, and possession of WMD remains an important divide in international politics today.

Although the West had doubts about the military usefulness of nuclear weapons during the Cold War, archival evidence confirms that the Soviet Union would have used nuclear weapons from the outset had war broken out in Europe. Ironically, this has tended to support the view that the West’s nuclear posture over 40 years of rivalry with the Soviet Union actually had a stabilizing effect. This in turn may provide some explanation of the motives for the proliferation of WMD in the 21st century–a position that is bolstered by the futility of investment in conventional defense in light of advances in the military application of information technology in the United States and its principal allies.

The nuclear postures of the former Cold War rivals have evolved more slowly than the fast-breaking political developments of the decade or so that has elapsed since the former Soviet Union collapsed. Nevertheless, some important changes have already taken place. By mutual consent, the Anti-Ballistic Missile (ABM) Treaty of 1972 was terminated by the United States and Russia, which have agreed to modify their nuclear offensive force posture significantly through a large reduction in the number of deployed delivery systems. Nuclear weapons are no longer at the center of this bilateral relationship. Although the two nations are pursuing divergent doctrines for their residual nuclear weapons posture, neither approach poses a threat to the other. The structure, but not the detailed content, of the future U.S. nuclear posture was expressed in the 2002 Nuclear Posture Review (NPR), which established a significant doctrinal shift from deterrence to a more complex approach to addressing the problem of proliferated WMD.

The Russian doctrinal adaptation to the post-Cold War security environment is somewhat more opaque. The government appears to be focused on developing and fielding low-yield weapons that are more suitable for tactical use, though the current building of new missiles and warheads may be associated with new strategic nuclear payloads as well. Despite the diminished post-Cold War role of nuclear weapons in the United States, the cumulative deterioration of Russia’s conventional military force since 1991 has actually made nuclear weapons more central to that government’s defense policy.

The end of the adversarial relationship with the Soviet Union (and later, the Russian Federation) had to be taken into account in the NPR. The current nuclear posture is evolving in a manner parallel to the modernization of the U.S. non-nuclear military establishment. In stark contrast to Cold War-era military planning, the 21st century is likely to be characterized by circumstances in which the adversary is not well known far in advance of a potential confrontation.

The U.S. Department of Defense (DOD) is adjusting to these new circumstances by developing highly capable and flexible military forces that can adapt to the characteristics of adversaries as they appear. The traditional path to modernization, investing in new weapons systems only after a threat emerges, is economically infeasible under these conditions. Modern information technology lets the military change the characteristics of its flexible weapons and forces in much less time than it would take to develop whole new weapons systems. Thus, DOD is attempting to create an integrated military information system spanning command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR). Such a system is inherently more flexible in adapting to changes in the threat environment.

These developments raise the question of what changes in the U.S. nuclear posture have been fostered by radical changes in the international security environment. In the 1990s, the United States selected eight (from the 32 in the inventory at the time) nuclear weapon types (with one additional type in reserve)–two for each of the four primary delivery systems [heavy bomber, cruise missile, land-based intercontinental ballistic missile (ICBM), and submarine-launched ballistic missile (SLBM)], to be preserved in perpetuity. The Stockpile Stewardship Program is developing a set of diagnostic and experimental facilities that will keep these eight weapons in the inventory indefinitely without sacrificing weapon safety and reliability. Most nuclear missions, apart from the area-destruction and shallow earth-penetration missions, were eliminated by these choices.

The NPR focuses on the limitations of applying weapons designed for the assured-destruction mission against the former Soviet Union to a much less predictable range of future adversaries and targets. The Bush administration concluded that it could dispense with thousands of weapons in the current stockpile, and it reached a bilateral agreement with Russia to institutionalize a reciprocal reduction in numbers of nuclear delivery systems and their associated nuclear payloads.

Thus, the need to be able to credibly produce (or threaten to produce) vast urban and industrial destruction in the 21st century has been sharply reduced. Moreover, the cumulative impact of information technology on U.S. non-nuclear capabilities has dramatically increased the ability to locate and destroy targets precisely. What role is left, therefore, for the unique properties of nuclear weapons–their intense nuclear and thermal radiation and extraordinary energy density–in a 21st-century security policy?

The 20th-century legacy

Coping with the threat of Soviet military power required a specialized nuclear posture in the 20th century. The concept of demonstrative use that characterized the early years of the nuclear program (1945–1953), in which a nuclear weapon would be used to intimidate rather than destroy an enemy, was replaced with concepts that explicitly coupled nuclear weapons to military as well as political purposes. Nuclear weapons in the thousands were needed to meet the security requirements that evolved from the defense policy developed by President Eisenhower in the early part of 1953. Planners expected to replace these with more modern versions over time. The modern optimized designs at the leading edge of scientific knowledge would be lighter, smaller, safer, and more effective. But they would not be designed for an indefinite service life, and their safety and reliability were tied to manufacturing processes and technologies that turned out to be incompatible with future engineering, environmental, and occupational safety and health standards.

These circumstances during the Cold War created a need for a large integrated nuclear weapons design, engineering, testing, and manufacturing complex capable of continuous modernization and series production as well as support for an inventory of thousands of weapons. Technical problems that emerged over the life of the nuclear weapon could be resolved through explosive testing of nuclear munitions. The availability of funds to continually upgrade the nuclear weapons stockpile and the opportunity to test the weapons themselves contributed to high confidence in the safety and reliability of the deployed weapon systems. An added measure of confidence was created by the retention of large numbers of obsolescent or nondeployed weapons in the inventory, which served as a hedge against unanticipated problems in the ability of the manufacturing complex to deliver nuclear weapon components or to complete systems when required.

The nuclear weapons manufacturing complex was significantly scaled back after the end of the Cold War in the hope that only weapon remanufacturing rather than series production would be required in the future. Nevertheless, the need to support the Stockpile Stewardship Program in the face of requirements for preserving eight weapon types without explosive testing still required a large complex. Until the U.S. Senate rejected the Comprehensive Test Ban Treaty in 1999, there was a prohibition on nuclear testing and the development of new weapon designs with a yield of less than five kilotons. In response to the Senate rejection of the treaty, the Bush administration adopted a conditional readiness policy to resume testing if required, and new statutory authority has been granted to develop new weapon designs. However, avoiding or limiting requirements for testing is clearly preferred. Work is proceeding to develop several major diagnostic and experimental facilities that are aimed at simulating the nuclear explosive sequence without actual testing. These may also help reduce the cost and compress the time required to create new weapon designs or modify existing ones.

The eight legacy weapons were designed for a specific set of military applications in support of a policy to deter a well-understood adversary. A series of bilateral initiatives successively reduced the possibilities for miscalculation. Yet the ability to limit damage from a nuclear attack in the event that deterrence failed was denied by treaty from 1972 to 2002.

The new security environment

WMD and the means to deliver them are mature technologies, and knowledge of how to create such capabilities is widely distributed. Moreover, the relative cost of these capabilities declined sharply toward the end of the 20th century. Today, the poorest nations on earth (such as North Korea and Pakistan) have found WMD to be the most attractive course available to meet their security needs. Proliferation of WMD was stimulated as an unintended consequence of a U.S. failure to invest in technologies such as ballistic missile defense that could have dissuaded nations from investing in such weapons. The United States’ preoccupation with deterring the Soviet Union incorporated the erroneous assumption that success in that arena would deter proliferation elsewhere. This mistake was compounded by the perverse interaction between defense policy and arms control in the 1990s. Misplaced confidence was lodged in a network of multilateral agreements and practices to prevent proliferation that contributed to obscuring rather than illuminating what was happening. Confidence placed in the inspection provisions of the Nuclear Non-Proliferation Treaty (NPT), for example, obscured efforts to obtain knowledge of clandestine WMD programs. NPT signatories were among those nations with clandestine WMD programs.

Without a modernization of defense policy, the ready availability of WMD-related technology will converge with their declining relative cost and a fatally flawed arms control structure to stimulate further proliferation in the 21st century. The process whereby WMD and ballistic missile technology has proliferated among a group of nations that otherwise share no common interests is likely to become the template for 21st-century proliferation.

The scope of this problem was recognized in part as a result of a comprehensive review of intelligence data in 1997–1998 by the Commission to Assess the Ballistic Missile Threat to the United States (the Rumsfeld Commission). This recognition swiftly evolved into a set of significant policy initiatives that responded to changes in the international security environment. The arms control arrangements most closely identified with the adversarial relationship with the former Soviet Union were passé. In 1999, the Senate refused to ratify the Comprehensive Test Ban Treaty; in 2002, the United States and Russia ended the 1972 ABM Treaty and agreed to jettison the START process, which had kept nuclear deployments at Cold War levels, in favor of much deeper reductions in offensive forces.

U.S. policy began to evolve in response to these developments. The incompatibility between the Cold War legacy nuclear posture and the 21st-century security environment stimulated a search for approaches to modernize policies pertinent to nuclear weapons. In response to statutory direction, the Bush administration published the Quadrennial Defense Review, the Nuclear Posture Review, the National Defense Strategy of the United States, and the National Strategy to Combat Weapons of Mass Destruction. Taken together, these documents constitute the most profound change in U.S. policy related to nuclear weapons since the Eisenhower administration.

These policy documents in turn reflected the administration’s shift from a planning model based on specific threats (where military forces could be optimized against a known threat) to one based on capabilities. The administration sought to transform U.S. military capabilities to serve national policy in an environment where emerging threats would not give governments enough warning to replace their inventories with weapons systems suited to the new threat.

The transformation aspirations reflected in the administration’s defense policy had a nuclear component as well, although the role of nuclear weapons would be much smaller than during the Cold War. Two new elements that were not part of Cold War strategic forces affected the role of nuclear weapons powerfully. The agreement to terminate the ABM treaty allowed ballistic missile defense to be introduced into the strategic equation. And the cumulative effect of advances in non-nuclear weapons (especially precision strike capability and information-intensive conventional military operations) dramatically expanded the ability of non-nuclear weapons to hold adversary strategic targets at risk throughout the threat cycle, thus diminishing the need to use nuclear weapons for this purpose.

Accuracy is now largely independent of the range of a weapons system, and increasingly persistent surveillance is available in almost every corner of Earth. Yet these capabilities can be delivered with a much smaller force structure today than previously. Advances in the military applications of information technology are letting the United States and similarly equipped allies substitute bandwidth (a proxy measure for information content) for force structure. Bandwidth used by military forces has increased by a factor of 100 since Operation Desert Storm in 1991, whereas the number of troops in U.S. forces has declined by one-third.

These capabilities allow a sharply focused military campaign to concentrate the application of military power on attacking targets (by electronic or kinetic means) in order to achieve specified military and political aims. This emphasis on effects-based military operations minimizes collateral damage, logistics requirements, and the duration of a conflict. All this has profound implications for the future role of nuclear weapons.

New weapons for new conditions

The unique capabilities of nuclear weapons may still be required in some circumstances, but the range of alternatives to them is much greater today. The evolution of technology has created an opportunity to move from a policy that deters through the threat of massive retaliation to one that can reasonably aspire to the more demanding aim–to dissuade. If adversary WMD systems can be held at risk through a combination of precision non-nuclear strike and active defense, nuclear weapons are less necessary. By developing a military capability that holds a proliferator’s entire WMD posture at risk rather than relying solely on the ability to deter the threat or use of WMD after they have been developed, produced, and deployed, the prospects for reducing the role of WMD in international politics are much improved.

Although a detailed discussion of the likely path of the problem of WMD proliferation is beyond the scope of this article, the macro trends are well known. The relative cost of procuring a WMD capability is declining, while the opportunities to conceal these programs (for example, through advanced tunneling technology to create underground development, manufacturing, and storage sites) are improving. A defense posture that makes it extremely difficult for a potential proliferator to reasonably expect that it will be able to field and use an effective WMD capability is more likely to dissuade him from acquiring such a capability in the first place.

An important dimension of the policy shift is the need to be able to hold WMD at risk before their operational use. A wide range of circumstances could produce the use or threat of use of such capabilities that might not necessarily be affected by deterrence alone. The need to preemptively target WMD capabilities is driven by the nature of these weapon systems. A single nuclear or biological weapon could produce tens or even hundreds of thousands of casualties.

The military capabilities needed to implement the new approach are formidable. Although the unique effects of nuclear weapons have a role in this policy under a narrow range of circumstances, the decisive enabler is a highly effective C4ISR system. The Defense Science Board described the capabilities and concepts of operation needed to implement this policy in its 2003 report The Future of Strategic Strike.

The 21st-century proliferation problem creates a set of targets significantly different from those that existed during the Cold War. Few targets can be held at risk only by nuclear weapons, but the ones that are appropriate may require different characteristics and, in many circumstances, different designs than those currently in the nuclear stockpile. The nature of the targets and the scope of the potential threat also alter the character of the underlying scientific, engineering, and industrial infrastructure that supports the nuclear weapons posture. Some of the desirable characteristics of 21st-century nuclear weapons and the supporting infrastructure include the following:

Low maintenance. The current stockpile is based on nuclear weapon designs that have stayed in the inventory well beyond their anticipated life (the average age of the weapons in the stockpile is approximately 20 years). To maintain a high level of safety and reliability, costly and complex maintenance is required. Weapon designs that focus on achieving a high order of weapon safety and reliability with very low maintenance are needed.

Tailored effects. Nuclear weapon designs that can attack a wide variety of targets and support a wider set of missions, including the defeat of biological weapons agents and of heavily fortified, deeply buried facilities, are more appropriate than the present stockpile, which is designed primarily to attack targets across a wide area or hardened military targets on or near the surface.

Just-in-time development and manufacturing. The current complex is not responsive to the tempo of changes in the threat. This problem has been mitigated by keeping a nuclear weapons stockpile that is larger than operationally necessary. The development of a manufacturing complex that could create new designs (or modify existing ones) and manufacture weapons in the quantities needed when the threat emerges is more appropriate to 21st-century conditions.

Cross-platform warhead designs. The current stockpile incorporates two weapon designs for each of four types of delivery platforms (ICBM, SLBM, bomber, and cruise missile). Although stockpile diversity is a prudent hedge with the current stockpile, new designs that could be used on a number of different platforms could reduce the cost and the number of weapons that would have to be stockpiled.

High levels of safety and reliability. Weapon safety has always been a central issue of stockpile design, and the reliability of weapon performance could be maintained by cross-targeting and modernization. The terrorist threat makes it desirable to make weapons even safer and less vulnerable to unauthorized use, while always performing in a predictable and reliable manner, even in uses not contemplated when the weapons were designed.

No special delivery system requirements. Stockpile weapons are designed and optimized for a specific delivery system, and this was especially true for tactical aircraft. These airplanes required extensive modifications to be certified for nuclear delivery. In the future, nuclear weapons should be designed as much as possible to be independent of the platform used to deliver them.

Integration of nuclear weapons planning into conventional operations. The use of nuclear weapons was historically a specialized mission that was separate from conventional military operations (except in the case of North Atlantic Treaty Organization operations during the Cold War). Although strict control by the president over nuclear release is still a critical requirement, the planning process for 21st-century security requires that nuclear weapons be more integrated with advanced conventional weapons and forces.

Reuse of tested warhead designs where possible. Although only a few of the nuclear weapons developed during the Cold War were retained for the stockpile, a much larger number of fully developed and tested designs were created. Some of these designs could be reused, although some modifications are inevitable in both design and manufacturing as well as deployment modes to make them suitable for 21st-century needs.

Readiness to test new weapon designs, design modifications, or manufacturing process changes. The Senate decisively rejected a permanent ban on nuclear testing in 1999, reversing a policy in place since 1992. The Stockpile Stewardship Program may significantly diminish but will not wholly eliminate the need to test. Testing can materially add to confidence in a particular design in some circumstances, and there is a need to revive fundamental research in nuclear weapons physics.

Inclusion of the requirement to defeat biological weapons. Little is known about what is necessary to destroy biological weapon pathogens, but some analysis suggests that non-nuclear munitions may be ineffective. Because nuclear weapons may be required in certain circumstances to destroy biological pathogens, their effectiveness for this purpose must be evaluated.

It is impossible, of course, to permanently rule out a requirement for the sort of area destruction that was the hallmark of Cold War-era deterrence. Hence, there is likely to be an enduring requirement to retain a measure of this capability in the national inventory. In addition, although most targets are vulnerable to precision non-nuclear weapons, an irreducible subset of these targets can only be held at risk by the unique properties of nuclear weapons. In this respect, nuclear weapons are still vital to U.S. national security.

The refocusing of U.S. nuclear weapons policy and programs from the relatively narrow notion of deterrence to the broader aspiration of dissuasion has a better chance of working than the failed techniques of 20th-century nonproliferation policy: international norms, exhortation, and economic sanctions. Libya’s decision to abandon its own WMD programs in December 2003 after decades of obfuscation and clandestine investment suggests that at least some governments can be dissuaded from their efforts to acquire WMD by more direct and explicit threats to their WMD programs. Moreover, Libya’s decision has exposed a vast secret infrastructure of nuclear weapons technology and critical components involving, directly or indirectly, more than a dozen nations. This discovery will have to be considered in planning international efforts to limit proliferation.

It remains to be seen whether the Bush administration’s plan for combating WMD will be successful. The integration of a modern global C4ISR system, advanced conventional weapons, and a modernized nuclear weapons development and manufacturing complex is a costly, prolonged, and complex affair, with characteristics that are often difficult for democratic governments to sustain. Nevertheless, the preliminary evidence suggests that the first quarter of the 21st century may offer the best hope for recreating international norms against WMD proliferation.


William Schneider, Jr., ([email protected]) is chairman of the Defense Science Board of the Department of Defense.

Atoms for Peace after 50 Years

President Eisenhower’s hopes for nuclear technology still resonate, but the challenges of fulfilling them are much different today.

On December 8, 1953, President Eisenhower, returning from his meeting with the leaders of Britain and France at the Bermuda Summit, flew directly to New York to address the United Nations (UN) General Assembly. His presentation, known afterward as the “Atoms for Peace” speech, was bold, broad, and visionary. Eisenhower highlighted dangers associated with the further spread of nuclear weapons and the end of the thermonuclear monopoly, but he also pointed to opportunities. Earlier that year, Stalin had died and the Korean War armistice was signed. Talks on reunifying Austria were about to begin. The speech sought East-West engagement and outlined a framework for reducing nuclear threats to security while enhancing the civilian benefits of nuclear technology. One specific proposal offered to place surplus military fissile material under the control of an “international atomic energy agency” to be used for peaceful purposes, especially economic development. Eisenhower clearly recognized the complex interrelationships between different nuclear technologies and the risks and the benefits that accrue from each. The widespread use of civilian nuclear technology and the absence of any use of a nuclear weapon during the half-century after his speech reflect the success of his approach.

Today, the world faces choices about nuclear technology that have their parallels in the Eisenhower calculus and its legacy. Although his specific proposal for the use of fissile material was never implemented, his broader themes gave impetus to agreements such as the nuclear Non-Proliferation Treaty (NPT) and to institutions such as the International Atomic Energy Agency (IAEA). The resulting governance process has promoted some nuclear technologies and restricted others. Perhaps even more influential was Eisenhower’s overarching recommendation that we try to reduce the risks and seek the benefits of nuclear technology. Whether seen as an effort to rebalance investment in a dual-use technology or as the foundation for a “bargain” between nuclear haves and have-nots, Eisenhower’s speech brought together concepts that furnished the theoretical underpinnings of the nuclear technology control regime that has governed for nearly 50 years. Some believe that Eisenhower’s basic concepts remain sound and provide a foundation for the future. Others believe that they were never sound and promulgated dangerous dual-use technology around the world. Many are still debating exactly what Eisenhower meant to say.

The post-Cold War world provides a new context for discussing nuclear technology. Emphasis on the thermonuclear “sword of Damocles” as a deterrent to the superpower use of nuclear weapons has nearly disappeared. Nuclear weapon stockpiles of the superpowers, which peaked under the Johnson and Brezhnev administrations, have been greatly reduced and continue to shrink. Nuclear weapons, once seen as the “cheap” substitute for conventional armaments, are now weapons of last resort, whose primary purpose is to deter others from using weapons of mass destruction (WMD) or to retaliate if they do. Today, however, growing regional competition raises the challenge of multipolar deterrence; and technology-empowered terrorism, against which retaliation is difficult, if not impossible, calls into question the effectiveness of deterrence itself. As Eisenhower spoke, only three nations possessed nuclear weapons, and each was a permanent member of the UN Security Council. Some 189 nations are today parties to the NPT, and four states have voluntarily given up their nuclear weapons. Seven nations have nuclear weapons. Israel and North Korea are believed to have them, and others appear to be pursuing them. The emergence of nuclear weapons in troubled regions such as the Middle East, South Asia, and the Korean Peninsula may make nuclear conflict more likely than during the Cold War, and the growing latency of a nuclear weapon capability increases concerns about weapons getting into the hands of “rogue states” or even substate actors or terrorists.

In 1953, when Eisenhower first touted the benefits of nuclear technology, nuclear power plants were still on the drawing board. Over the next two decades, hundreds of nuclear power reactors were either built or begun in over 40 countries. Concerns about economics, safety, and proliferation have now led to a near-cessation of new reactor construction, leaving future growth uncertain. Existing reactors will in many cases continue to operate for the next 50 years or so, but we cannot know whether the public and the market will accept new reactor designs or fuel-cycle technologies. Indeed, other applications of nuclear technology, such as in agriculture and medicine, which Eisenhower emphasized in his speech, have achieved greater public acceptance.

Much of the optimism about what Walt Disney popularized as “our friend, the atom” has disappeared in the face of the public’s deep-seated apprehension about all things radioactive. Limited stocks of fissile material that Eisenhower saw as a potentially valuable resource have now grown and become a huge overhang of nuclear materials and waste whose future use or disposition are highly uncertain despite programs for regional repositories, waste minimization, transmutation, or reuse as fuel. “Not in my backyard” (NIMBY) attitudes and near-zero tolerance for environmental risk have replaced the national sense of urgency that drove the application of nuclear technology in the 1950s. Lack of confidence in international institutions, national governments, and industry, as well as public skepticism about risk/benefit analyses, have frequently paralyzed change. Neither a consensus nor even a working plurality exists to address some important challenges and opportunities.

Existing nuclear reactors and legacy materials will keep the nuclear technology question on center stage for many decades to come, but progress is unlikely unless we develop a comprehensive long-term vision for the future of nuclear technology. In charting a path, we need to consider powerful forces such as climate change, rapidly developing technologies, and geoeconomic or strategic pressures. We can control many of these forces, but some transforming events may surprise us. Interest in nuclear technology could be stimulated by air quality concerns, economic growth in the developing world with large increases in energy demand, oil politics, technological advances in power plants, regulatory reform, successful waste management, or new medical and food applications. Or it could be discouraged by political gridlock over waste management, increased alarm about proliferation and terrorism, a major nuclear accident, NIMBY, progress in alternative energy technologies, or tighter environmental rules.

Benefits and risks

How likely it is that the possession of nuclear weapons might spread under various future political circumstances still depends on how widely nuclear weapons technology diffuses. About 75 countries have, had, or will soon have nuclear reactors (for power or research). In October 2003, IAEA Director General Mohamed ElBaradei expressed his concern that the “margin of safety” was becoming too small and said that we live in a world with “35 to 45 countries in the know.” To illustrate the scope of peaceful nuclear materials activity, he noted, “50 countries have spent fuel stored in temporary sites.”

The wide prevalence of nuclear activities is further complicated by the international movement of knowledge and materials. The transfer of key technology, materials, and services takes place at many levels of sophistication and through many channels, including gray and black markets. Dual-use equipment and facilities, and especially components, are now commodities, too ubiquitous for export controls or site monitoring. An incremental accumulation of capabilities and “just-in-time” production of components or weapons makes decisive reaction even more difficult. Parallel tracks of confrontation and engagement and divergent histories of relations among nations complicate the development of an international and domestic consensus on enforcement. On the demand side, regional military calculations welded to domestic political aspirations are difficult to address. In the case of ethnic and religious extremism and suicide mentalities, governments have difficulty in even understanding how violent specific individuals or groups may become, or how indigenous populations will react to such violence.

In the face of these new threats, President Bush called in February of this year for tougher controls on nuclear fuel production, expansion of the Nunn-Lugar program to secure Russia’s nuclear materials and technology, and an expansion of the Proliferation Security Initiative, which aims to intercept unconventional weapons and materials through a coalition of the willing, as opposed to a formal treaty. He also proposed bolstering the organization of the IAEA to focus on safeguards, and limiting the spread of enrichment and reprocessing facilities to those now possessing them.

Clearly, the future of civilian nuclear technology is linked to the future of international and domestic security. Indeed, nuclear power may contribute to policy objectives such as defense, nonproliferation, energy security, and protecting the environment. These contributions, however, are significantly less compelling if nuclear power is not economically viable.

Can nuclear power advocates successfully go beyond mitigating risks to make the case that security is positively enhanced by nuclear power or that nuclear power is at least neutral in this regard? The fundamental link is between prosperity and security, not only for the Western democracies but also for countries of concern in the developing world. There, the benefits of power must be perceived as being of greater value than weapons, a questionable proposition in some of the key countries of concern, particularly in the oil-rich regions. Many advocates of nuclear power hope that a dual-track approach combining aggressive nonproliferation and disarmament can increase support for building more reactors. Some believe that government and international ownership of civilian facilities, in addition to increasing security, may give nuclear power a better image with opponents, especially in this age of proliferation and terrorism. Balancing the various desires of the many participants in the debate will not be simple.

Some envision a new “grand bargain” that brings the nonmembers of the NPT into the regime in exchange for their implementing tight export controls. Yet the problem has been the ability to enforce existing export controls and commitments of states already party to the NPT. Furthermore, bringing these additional weapon states into the regime may drive other countries out so as to reach the same bargain. If India and Pakistan were allowed to join the NPT and keep their weapons, why can’t Iran or Brazil or others be allowed to acquire nuclear weapons and expect nuclear cooperation? Some look to the fulfillment of the NPT Article VI goal of nuclear disarmament in order to gain greater acceptance of the peaceful applications of nuclear technology. Yet past reductions have neither prevented horizontal proliferation nor eliminated the motivations of terrorists.

New directions

Alternative futures for nuclear technology are possible, and the most likely outcomes are not obvious. We confront a legacy of large nuclear weapon stockpiles, huge civilian and military fissile material inventories, large and growing quantities of nuclear waste, and a level of public skepticism that is not reassuring to those who advocate greater civilian use of nuclear power.

The futures of civilian and military use suffer from fragmented visions. The medical community avoids the term nuclear, and the power industry tends to trivialize the connection to proliferation. The public is left confused without a comprehensive picture of the risks and benefits, and the new and uncharted reality of terrorism further clouds the risk picture.

Security is an overriding issue for all of technology, but especially nuclear technology. Without reasonable assurances of security, there can be little confidence in nuclear technology and therefore at best suboptimal use of this technology for either civilian or defense purposes. The rising specter of WMD terrorism accompanies a growing interest in nuclear power to protect the environment and provide more geopolitically secure sources of energy. Concern over terrorism even permeates consideration of the growing field of nuclear medicine, with its improved and successful treatments for cancer and other diseases. Potential nuclear proliferation, whether through violations of the NPT or through withdrawal from the treaty by states that wish to join the ranks of the nuclear weapon states and the weapon-possessing states outside the NPT, may significantly reshape the international security environment.

Effective security will require vision and action in at least two areas:

Reducing the incentive for countries to acquire nuclear weapons. Dealing with the fundamental security and political motivations for proliferation needs more explicit attention of the sort that we have given to supply-side restraints. Particular emphasis should be placed on improving security conditions and guarantees.

Strengthening the effectiveness and enforcement of the NPT regime. Support for the NPT is strong, but there are serious divisions about the treaty regime’s ability to address the emerging challenges of spreading technology. Central to the debate over management of the nuclear future is the question of which principles or rules should be applied universally and which should be tailored to specific circumstances or time frames. How NPT parties should relate to nonparties remains an issue, involving what benefits come from being a party and what responsibilities for restraint accrue from not being a party. Possible actions to improve the status quo include expanding the IAEA’s mandate beyond monitoring and verification into more active oversight of management and control of materials and facilities, enhanced export controls, and the use of the most up-to-date technology for safeguards and security.

Ultimately, progress will depend on a more informed public. This should begin with building public confidence through comprehensive risk/benefit assessments. The marketplace will primarily determine the extent of civilian applications, and governments will mostly determine future applications for defense purposes. In neither case, however, does a single group control decisionmaking, which will be driven by increasingly complex factors. Society needs a comprehensive analysis of risks and benefits in terms of the entire nuclear technology system. This is what President Eisenhower began in his Atoms for Peace speech. Today, this requires a more thorough and explicit assessment of the dangers of proliferation and terrorism and a better understanding of the cooperative roles that must be played by industry and government.

More and better dialogue and engagement with the public about nuclear technologies and about security and civilian benefits and risks, including radiological terrorism, will help clarify the actual versus the perceived risks. But the problem will not be resolved until the public has greater trust that the nuclear industry and government regulatory procedures are giving safety and security greater weight in their decisions.


Robert N. Schock and Neil Joeck are senior fellows at the Center for Global Security Research (CGSR) at Lawrence Livermore National Laboratory (http://cgsr.llnl.gov) in Livermore, California. Ronald F. Lehman is the director and Eileen S. Vergino is the deputy director of CGSR.

The Hope for Hydrogen

We should embrace hydrogen largely because of the absence of a more compelling long-term option.

The history of alternative transportation fuels is largely a history of failures. Methanol never progressed beyond its use in test fleets, despite support from President George H. W. Bush. Compressed natural gas remains a niche fuel. And nearly every major automotive company in the world has abandoned battery-electric vehicles. Only ethanol made from corn is gaining market share in the United States, largely because of federal and state subsidies and a federal mandate. Some alternatives have succeeded elsewhere for limited times, but always because of substantial subsidies and/or government protection.

Is hydrogen different? Why do senior executives of Shell, BP, General Motors, Toyota, DaimlerChrysler, Ford, and Honda tout hydrogen, and why do President George W. Bush, European Commission President Romano Prodi, and California Governor Arnold Schwarzenegger all advocate major hydrogen initiatives? Might hydrogen succeed on a grand scale, where other alternative fuels have not?

Hydrogen clearly provides the potential for huge energy and environmental improvements. But skeptics abound, for many good reasons. Academics question near-term environmental benefits, and activists and environmental groups question the social, environmental, and political implications of what they call “black” hydrogen (because it would be produced from coal and nuclear power). Others say we are picking the wrong horse. Paul MacCready argues in the forthcoming book of essays The Hydrogen Energy Transition that improved battery technology will trump hydrogen and fuel cell vehicles. And many, including John DeCicco of Environmental Defense, also in The Hydrogen Energy Transition, argue that the hydrogen transition is premature at best. A February 2004 report on hydrogen by the National Academies’ National Academy of Engineering and National Research Council agrees, asserting that there are many questions to answer and many barriers to overcome before hydrogen’s potential can be realized.

What is remarkable in the early stages of the debate is the source of public opposition: It is not coming from car or oil companies but primarily from those most concerned about environmental and energy threats. The core concern, as Joseph J. Romm argues so well in the preceding article, is that “a major effort to introduce hydrogen cars before 2030 would actually undermine efforts to reduce emissions of heat-trapping greenhouse gases such as CO2.”

In fact, the hydrogen debate is being sucked into the larger debate over President Bush’s environmental record. The environmental community fears that the promise of hydrogen is being used to camouflage eviscerated and stalled regulations and that it will crowd out R&D for deserving near-term energy efficiency and renewable energy opportunities. What the administration and others portray as a progressive long-term strategy, others see as bait and switch. Indeed, a backlash is building against what many see as hydrogen hype.

Perhaps this skepticism is correct. Perhaps it is true that without a hydrogen initiative, government leaders would pursue more aggressive fuel economy standards and larger investments in renewable energy. We remain skeptical. And even if it were true, what about the larger question of the size of the public energy R&D pie? If energy efficiency and climate change are compelling public issues, then quibbling over tens of millions of dollars within the U.S. Department of Energy budget misses the point. It should not be a zero-sum game; the real debate should be over the size of the budget itself.

In any case, we believe there is a different story to tell. First, hydrogen must be pursued as part of a long-term strategy. (Indeed, any coherent energy strategy should have a long-term component.) Second, hydrogen policy must complement and build on near-term policies aimed at energy efficiency, greenhouse gas reduction, and enhanced renewable energy investments. Hydrogen vehicles will not happen without those policies in place. In fact, hybrid vehicles are an essential step in the technological transition to fuel cells and hydrogen. And third, if not hydrogen, then what? No other long-term option approaches the breadth and magnitude of hydrogen’s public benefits.

The lessons of history

All previous alternative transportation fuels ultimately failed, largely for two reasons: They provided no private benefits, and claims of large public benefits regarding pollution and energy security proved to be overstated. The private benefits from compressed natural gas, ethanol, methanol, propane, and early battery-electric vehicles were nil. When compared to petroleum-fueled vehicles, all offer shorter driving ranges between refuelings and different safety and performance attributes, often perceived as inferior. The only clear benefits are reduced emissions and improved energy security, but few consumers purchase a vehicle for public-good reasons.

Overstated claims for new fuels were not intentionally deceptive. Rather, they reflected a poor understanding of energy and environmental innovation and policy. Two errors stand out: forecasts that understated future oil supplies and improvements in gasoline quality, and claims that overstated the environmental and economic benefits of alternative fuels. Oil turned out to be cheap and abundant, thanks to improved technologies for finding and extracting it; gasoline and diesel fuel were reformulated to be cleaner; and internal combustion engines continued to improve and now emit very few harmful air pollutants.

What do these lessons imply for hydrogen? First, hydrogen is unlikely to succeed on the basis of environmental and energy advantages alone, at least in the near to medium term. Hydrogen will find it difficult to compete with the century-long investment in petroleum fuels and the internal combustion engine. Hybrid electric vehicles, cleaner combustion engines, and cleaner fuels will provide almost as much energy and environmental benefit on a per-vehicle basis for some time. During the next decade or so, advanced gasoline and diesel vehicles will be more widespread and deliver more benefits sooner than hydrogen and fuel cells ever could. Hydrogen is neither the easiest nor the cheapest way to gain large near- and medium-term air pollution, greenhouse gas, or oil reduction benefits.

What about the long term? Although incremental enhancements are far from exhausted, there is almost no hope that per-vehicle reductions in oil use and carbon dioxide (CO2) emissions could even offset increases in vehicle usage, never mind achieve the radical decarbonization and petroleum reductions likely needed later this century.

The case for hydrogen

The case for hydrogen is threefold. First, hydrogen fuel cell vehicles appear to be a superior consumer product desired by the automotive industry. Second, as indicated by the National Academies’ study, the potential exists for dramatic reductions in the cost of hydrogen production, distribution, and use. And third, hydrogen provides the potential for zero tailpipe pollution, near-zero well-to-wheels emissions of greenhouse gases, and the elimination of oil imports, simultaneously addressing the most vexing challenges facing the fuels sector, well beyond what could be achieved with hybrid vehicles and energy efficiency.

The future of hydrogen is linked to the automotive industry’s embrace of fuel cells. The industry, or at least an important slice of it, sees fuel cells as its inevitable and desired future. This was not true for any previous alternative fuel. The National Academies’ report highlights the attractions of fuel cell vehicles. It notes that not only are fuel cells superior environmentally, but they also provide extra value to customers. They have the potential to provide most of the benefits of battery-electric vehicles without the short range and long recharge time. They offer quiet operation, rapid acceleration from a standstill because of the torque characteristics of electric motors, and potentially low maintenance requirements. They can provide remote electrical power–for construction sites and recreational uses, for example–and even act as distributed electricity generators when parked at homes and offices. Importantly, they also have additional attractions for automakers. By eliminating most mechanical and hydraulic subsystems, they provide greater design flexibility and the potential for using fewer vehicle platforms, which allow more efficient manufacturing approaches. Fuel cells are a logical extension of the technological pathway automakers are already following and would allow a superior consumer product–if fuel cell costs become competitive and if hydrogen fuel can be made widely available at a reasonable cost.

Those two “ifs” remain unresolved and are central to the hydrogen debate. Fuel cell costs are on a steep downward slope and are now perhaps a factor of 10 to 20 too high. Huge amounts of engineering are still needed to improve manufacturability, ensure long life and reliability, and enable operation at extreme temperatures. Although some engineers believe that entirely new fuel cell architectures are needed to achieve the last 10-fold cost reduction, a handful of automotive companies seem convinced that they are on track to achieve those necessary cost reductions and performance enhancements. Indeed, massive R&D investments are taking place at most of the major automakers.

The second “if” is hydrogen availability, which is perhaps the greatest challenge of all. The problem is not production cost or sufficient resources. Hydrogen is already produced from natural gas and petroleum at costs similar to those of gasoline (adjusting for fuel cells’ higher efficiency). With continuing R&D investment, the cost of providing hydrogen from a variety of abundant fossil and renewable sources should prove to be not much greater than that of providing gasoline, according to the National Academies’ study.
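
To make that efficiency adjustment concrete, here is a minimal back-of-the-envelope sketch. Every number in it (the gasoline price, the 30-mile-per-gallon baseline, the $3-per-kilogram hydrogen price, and the assumed twofold fuel cell efficiency advantage) is an illustrative placeholder rather than a figure from the National Academies’ study; the only physical fact used is that a kilogram of hydrogen carries roughly the energy of a gallon of gasoline.

```python
# Illustrative comparison of per-mile fuel costs for gasoline versus hydrogen.
# All prices and efficiencies are hypothetical placeholders, not study figures.

GASOLINE_PRICE = 1.50   # dollars per gallon (assumed)
GASOLINE_MPG = 30.0     # miles per gallon for a conventional car (assumed)

HYDROGEN_PRICE = 3.00   # dollars per kilogram of delivered hydrogen (assumed)
# A kilogram of hydrogen holds roughly the energy of a gallon of gasoline,
# and the fuel cell drivetrain is assumed here to be about twice as efficient.
FUEL_CELL_EFFICIENCY_GAIN = 2.0
HYDROGEN_MILES_PER_KG = GASOLINE_MPG * FUEL_CELL_EFFICIENCY_GAIN

cost_per_mile_gasoline = GASOLINE_PRICE / GASOLINE_MPG
cost_per_mile_hydrogen = HYDROGEN_PRICE / HYDROGEN_MILES_PER_KG

print(f"Gasoline: ${cost_per_mile_gasoline:.3f} per mile")
print(f"Hydrogen: ${cost_per_mile_hydrogen:.3f} per mile")
```

With these placeholder values the two fuels land at the same cost per mile, which is the sense in which hydrogen at a few dollars per kilogram can be “similar” to gasoline once the fuel cell’s efficiency advantage is counted.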

The key supply challenges are as follows. First is the need for flexibility. There are many possible paths for making and delivering hydrogen, and it is difficult at this time to know which will prevail. Second, because private investment will naturally gravitate toward conventional fossil energy sources, currently the lowest-cost way to make hydrogen, government needs to accelerate R&D of zero-emission hydrogen production methods. Renewable hydrogen production is a key area for focused R&D. CO2 sequestration–a prerequisite if abundant coal in the United States, China, and elsewhere is to be used–is another possible path to very-low-emission hydrogen. Although the cost of capturing carbon from large fossil fuel plants and sequestering it is not prohibitive in a large range of locations and situations, CO2 sequestration faces uncertain public acceptance. Will CO2 be perceived in the same light as nuclear waste, leading to permitting delays and extra costs?

The third supply-related challenge is logistical in nature. How can hydrogen be provided at local refueling sites, offering both convenience and acceptable cost to consumers during a transition? Today’s natural gas and petroleum distribution systems are not necessarily good models for future hydrogen distribution, especially in the early stages of hydrogen use when consumption is small and dispersed. If future hydrogen systems attempt to simply mimic today’s energy systems from the beginning, distribution costs could be untenably large, and the hydrogen economy will be stillborn. Unlike liquid transportation fuels, hydrogen storage, delivery, and refueling are major cost contributors. Astoundingly, delivering hydrogen from large plants to dispersed small hydrogen users is now roughly five times more expensive than producing the hydrogen. Even for major fossil fuel-based hydrogen production facilities under study, distribution and delivery costs are estimated to be equal to production costs.
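
The arithmetic behind those ratios is simple but worth spelling out. The sketch below assumes a hypothetical production cost of $1 per kilogram and expresses delivery as a multiple of production cost; the multipliers of five (dispersed small users) and one (large facilities under study) simply restate the ratios cited above, not the underlying cost estimates.

```python
# Illustrative sketch of how delivery dominates the delivered cost of hydrogen.
# The production cost is a hypothetical placeholder; the delivery multipliers
# restate the ratios cited in the text rather than any particular cost study.

PRODUCTION_COST = 1.00  # dollars per kilogram at the plant gate (assumed)

def delivered_cost(production_cost: float, delivery_multiplier: float) -> float:
    """Delivered cost = production cost plus a delivery cost expressed as a
    multiple of the production cost."""
    return production_cost * (1.0 + delivery_multiplier)

# Early transition: small, dispersed users; delivery is roughly 5x production.
early = delivered_cost(PRODUCTION_COST, 5.0)
# Large fossil fuel-based facilities: delivery roughly equals production.
large_scale = delivered_cost(PRODUCTION_COST, 1.0)

print(f"Dispersed small users:  ${early:.2f}/kg delivered (~83% of it delivery)")
print(f"Large-scale facilities: ${large_scale:.2f}/kg delivered (~50% of it delivery)")
```

Either way, moving and dispensing the fuel, rather than making it, sets the cost hurdle during the early stages of a transition.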

Clearly, a creative, evolutionary approach is needed, eventually leading to a system that serves both stationary and mobile users, relies on small as well as large hydrogen production facilities, accesses a wide variety of energy feedstocks, incorporates CO2 capture and sequestration, and is geographically diverse. In the very early stages of a transition, hydrogen might be delivered by truck from a central plant serving chemical uses as well as vehicles or be produced at refueling sites from natural gas or electricity. Distributed generation will be a key part of the solution, with production near or at the end-use site. The National Academies’ report argues that the hydrogen economy will initially and perhaps for a very long time be based on distributed generation of hydrogen. (Honda and General Motors propose placing small hydrogen refueling appliances at residences.) Other innovative solutions would be needed, especially during the early phases. In cities with dense populations, pipelines would probably become the lowest-cost delivery option, once a sizeable fraction of vehicles run on hydrogen. The transportation fuel and electricity and chemical industries might become more closely coupled, because the economics can sometimes be improved by coproduction of electricity, hydrogen, and chemical products. Transitions would proceed in different ways, depending on regional resources and geographic factors.

No natural enemies

Although the challenges are daunting, perhaps the most important factor is the absence of natural political or economic enemies. For starters, hydrogen is highly inclusive, capable of being made from virtually any energy feedstock, including coal, nuclear, natural gas, biomass, wind, and solar.

The oil industry is key. It effectively opposed battery-electric vehicles, because companies saw no business case for themselves. Hydrogen is different. Oil companies are in actuality massive energy companies. They are prepared to supply any liquid or gaseous fuel consumers might desire, although of course they prefer a slow transition that allows them to protect their current investments. Most, for instance, prefer that initial fuel cell vehicles carry reformers to convert gasoline into hydrogen. They have been disappointed that all major car companies are now focused strictly on delivered hydrogen.

Oil companies will not allow the hydrogen economy to develop without them. Indeed, some have played key roles in promoting hydrogen, and many are active participants in hydrogen-refueling demonstration projects around the world. But oil companies would not realize a rapid payoff from being the first to market. Rather, they anticipate large financial losses that would be stanched only when hydrogen use became widespread. Without government support during the low-volume transition stage, oil companies are unlikely to be early investors in the construction of hydrogen fuel stations. They are best characterized as watchful, strategically positioning themselves to play a large role if and when hydrogen takes off.

Automakers see a different business reality. For them, there are benefits in being first to market, and hydrogen fuel cells look like the desirable next step in the technological evolution of vehicles. Hydrogen’s future thus appears to be tightly linked to automaker commitments to move fuel cells from the lab to the marketplace. The key question is whether and when they will ratchet up current investments of perhaps $150 million per year (in the case of the more aggressive automakers) to the much larger sums needed to tool factories and launch commercial products. Without automaker leadership, the transition will be slow, building on small entrepreneurial investments in niche opportunities, such as fuel cells in off-road industrial equipment, hydrogen blends in natural gas buses, innovative low-cost delivery of hydrogen to small users, and small energy stations simultaneously powering remote buildings and vehicle fleets.

If not hydrogen, then what?

What are the alternatives to hydrogen? The only other serious long-term options for fueling the transport sector are grid-supplied electricity and biomass. Electricity is quite appealing on environmental and energy grounds. It offers many of the same benefits as hydrogen: access to renewable and other feedstocks and zero vehicular emissions. But every major automaker has abandoned its battery-electric vehicle program, except for DaimlerChrysler’s small factory in North Dakota, which produces the GEM neighborhood vehicle. For battery-electric vehicles to become viable, batteries or other electricity storage devices would need to improve several-fold, or massive investments would be needed in “third rail” electricity infrastructure, which would also add substantial cost to the vehicles themselves. Such breakthroughs are unlikely. Batteries will continue to improve incrementally, but after a century of intense research, there is still no compelling proposal for reducing material costs enough to make batteries competitive with internal combustion engines. The same is not true of fuel cells.

The other long-term proposal is biomass. Cellulosic materials, including trees and grasses, would be grown on the vast land areas of the United States and converted into ethanol or methanol fuel for use in combustion engines. Although this energy option is renewable, the environmental effects of intensive farming are not trivial, and the land areas involved are massive. Moreover, there are few other regions in the world available for extensive energy farming.

Other options include fossil-based synthetic fuels, in which shale oil, oil sands, coal, and other abundant materials are converted into petroleum-like fuels and then burned in combustion engines or converted into hydrogen at fuel stations or on board vehicles for use in fuel cells. But with all these options, carbon capture at the site is more difficult than with coal-to-hydrogen options, CO2 volumes would be massive, and the overall energy efficiency would be far inferior.

We conclude that hydrogen merits strong support, if only because of the absence of a more compelling long-term option.

Hydrogen’s precarious future

The transition to a hydrogen economy will be neither easy nor straightforward. Like all previous alternatives, it faces daunting challenges. But hydrogen is different. It accesses a broad array of energy resources, potentially provides broader and deeper societal benefits than any other option, potentially provides large private benefits, has no natural political or economic enemies, and has a strong industrial proponent in the automotive industry.

In the end, though, the hydrogen situation is precarious. Beyond a few car companies and a scattering of entrepreneurs, academics, and environmental advocates, support for hydrogen is thin. Although many rail against the hydrogen hype, the greater concern perhaps should be the fragile support for hydrogen. Politics aside, we applaud the United States, California, and others for starting down a path toward a sustainable future. Although we do not know when or even if the hydrogen economy will eventually dominate, we do believe that starting down this path is good strategy.

The key is enhanced science and technology investments, both public and private, and a policy environment that encourages those investments. Fuel cells and hydrogen provide a good marker to use in formulating policy and gaining public support. Of course, policy should remain focused on near-term opportunities. But good near-term policy, such as improving fuel economy, is also good long-term policy. It sends signals to businesses and customers that guide them toward investments and market decisions that are beneficial to society. It appears to us that hydrogen is a highly promising option that we should nurture as part of a broader science, technology, and policy initiative. The question is how, not if.

Recommended reading

National Research Council and National Academy of Engineering, The Hydrogen Economy: Opportunities, Costs, Barriers, and R&D Needs (Washington, D.C.: The National Academies Press, 2004) (available online at www.nap.edu).

Daniel Sperling and James S. Cannon, The Hydrogen Energy Transition: Moving Toward the Post-Petroleum Age in Transportation (St. Louis: Elsevier, 2004).


Daniel Sperling ([email protected]) is director of the Institute of Transportation Studies and a professor of engineering and environmental policy at the University of California at Davis. Joan Ogden ([email protected]) is associate professor of environmental science and policy at the University of California at Davis.

The Nuclear Power Bargain

The potential benefits are enormous if we can continue to make progress on safety, environmental, fuel supply, and proliferation concerns.

President Dwight D. Eisenhower electrified the United Nations (UN) General Assembly with his vision that “the fearful trend of atomic military buildup can be reversed, this greatest destructive force can be developed into a great boon for the benefit of all mankind . . . to serve the peaceful pursuits of mankind . . . [in] electrical energy, agriculture, medicine, and other peaceful activities.” He further proposed to “allocate fissionable material [for peaceful uses] from a bank under international atomic energy agency control [and] . . . provide special safe conditions under which such a bank of fissionable material can be made essentially immune to surprise seizure.” Although the “bank” never eventuated, the Nuclear Non-Proliferation Treaty (NPT) and the International Atomic Energy Agency (IAEA) were instituted to apply the controls associated with a new “bargain”: Nations forgoing nuclear weapons development would be given the peaceful benefits of nuclear technology.

The initiatives stemming from Eisenhower’s 1953 address helped quite literally to electrify the world. Today, 441 nuclear power plants provide 16 percent of the world’s electricity. After years of intensive technical and institutional development to correct early problems, these plants are now operating safely and, on average, with high reliability and competitive costs. Many countries depend critically on nuclear power. Among the 10 countries that rely on it most heavily (Lithuania, France, Belgium, Slovakia, Bulgaria, Ukraine, Sweden, Slovenia, Armenia, and Switzerland), nuclear power provides some 40 to 80 percent of each nation’s electricity. Not far behind are the Republic of Korea (38 percent) and Japan (35 percent). The United States, at 20 percent, ranks 19th but generates more electricity from nuclear plants than any other country, and six of its states derive 50 percent or more of their electricity from nuclear power. As licenses of existing U.S. plants are being extended by 20 years, and as similar actions are taken overseas, continued usage at present levels through mid-century seems assured.

What is less clear is whether nuclear power capacity will actually expand during that period. Certainly the potential is there. Major growth in primary energy production will be needed to serve a global population that could reach 9 or 10 billion by 2100. Electricity demand is projected to grow by 480 percent in a high economic scenario and by up to 140 percent in an ecologically driven scenario governed by conservation and the reduction of greenhouse gas emissions. Given those looming needs, it seems logical to predict a widening role for a source of economical combustion-free energy that does not generate greenhouse gas or air pollution emissions and that uses a fuel supply that is sustainable over the long haul.

But expansion of nuclear power has reached a virtual standstill. In the United States, no orders have been placed for nuclear power plants in more than two decades. Worldwide, only 32 nuclear power plants are under construction, most of them in India and China. From the mid-1980s until recently, R&D budgets for civilian nuclear power had been steadily declining in most of the industrialized countries, with the exception of Japan and France. The downturn is largely a result of slower growth in electricity demand and an abundance of natural gas at low prices. Under those conditions, gas-fired plants have grown more economical for expanding capacity. But history also plays a role. The legacy of earlier problems, including the high-profile accidents at Three Mile Island and Chernobyl, remains in the form of continued public skepticism about the safety of nuclear power and its radioactive wastes. Those concerns are amplified by a general fear of radiation and the specter of the atom bomb. In response, Sweden, Italy, and Germany have imposed moratoriums on nuclear power.

To contribute significantly to global energy demand, the nuclear power industry must earn public confidence by maintaining an excellent safety record. But success in the marketplace depends on economic factors: the capital cost of new plants and the operating and maintenance costs of existing and new plants. These costs are strongly influenced by safety, reliability, environmental considerations (global climate change, regional air pollution, and waste disposition), and the adequacy and stability of fuel supply. Research, development, and demonstration (RD&D), for both the near and long terms, are necessary to meet this total cost challenge, as well as to achieve advanced system performance. Nuclear plants’ resistance to proliferation must also be addressed. Revelations that some countries have developed weapons capabilities clandestinely, using nuclear power development as a cover, point up serious weaknesses in the international proliferation control system.

All of these issues are being dealt with to varying degrees, but considerably more progress will be needed before Eisenhower’s vision for peaceful uses of nuclear energy can be fully realized.

Growing pains

In its early decades, nuclear power became a victim of its own success. It grew as an energy source at about three times the rate of previous new sources of electricity generation. Partly because of that rapid expansion, a series of problems emerged. U.S. plant reliability deteriorated: The average capacity factor (the ratio of energy produced to the amount of energy that could have been generated at continuous full-power operation) fell to 60 percent versus the 80 percent expected. Because of a lack of timely and in-depth planning for the disposition of radioactive waste, efforts to develop a high-level waste repository were making little progress. The safety regulatory base was immature. As nuclear power developed, contractors faced major delays in gaining construction permits and were forced to undertake substantial retrofitting of plants under construction and already completed.

Then in 1979, the Three Mile Island accident occurred, partially melting that plant’s fuel and causing multibillion dollar losses in the plant investment and in the cost of cleanup and decommissioning. Because the plant was enclosed in a reinforced concrete “containment” to keep radiation from escaping, neither the public nor the plant operators were harmed. But many design, operational, and maintenance deficiencies were revealed that required years of technical and management remediation and significantly increased safety regulatory requirements.

The development of more rigorous operational standards since the Three Mile Island accident has had a salutary effect on the nuclear power industry. The Institute for Nuclear Power Operations was formed in the United States to establish standards of operational excellence and to monitor compliance with those standards by all U.S. commercial nuclear power plants. Later, in the wake of the lethal accident at the uncontained Chernobyl nuclear plant in Ukraine, this concept was expanded internationally with the formation of the World Association of Nuclear Operators.

These reforms have led to excellent safety and reliability records. U.S. plants posted average capacity factors of 91.5 percent in 2001, 91.7 percent in 2002, and 89.4 percent in 2003. The increased average capacity factor since 1992 is roughly equivalent to 13 new 1,000-megawatt (MW) plants. Parallel improvements were achieved worldwide, though with less difficulty than in the United States. In Western Europe and Asia, rapid expansion was made possible primarily by technology transfer of light water reactor (LWR) technology from the United States. Those plants proved more reliable initially than the older technology that produces the bulk of U.S. nuclear power, in part because they were deployed somewhat later and benefited from the early U.S. experience. Worldwide, nuclear plants in 2003 achieved an average capacity factor of 80 percent and an average availability of 87.3 percent (that is, plants were ready to provide power 87.3 percent of the time, whether or not the grid called on them).
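
As a back-of-envelope check on that equivalence, the arithmetic is simply the fleet capacity multiplied by the gain in capacity factor, expressed in units of new plants. In the sketch below, the fleet size, the baseline capacity factor, and the assumed capacity factor of a new plant are illustrative values supplied here, not figures from the text.

```python
# "Equivalent new plants" from a higher fleet-wide capacity factor.
# All inputs are illustrative assumptions, not figures from the article.

FLEET_CAPACITY_MW = 98_000   # assumed U.S. nuclear fleet capacity (~98 GW)
CF_BASELINE = 0.80           # assumed fleet-average capacity factor in the baseline year
CF_RECENT = 0.917            # fleet-average capacity factor in 2002 (quoted in the text)
NEW_PLANT_MW = 1_000         # reference new-plant size
NEW_PLANT_CF = 0.90          # assumed capacity factor of a new plant

# Capacity factor = energy produced / energy at continuous full power,
# so the extra average output from better operations is:
extra_average_mw = FLEET_CAPACITY_MW * (CF_RECENT - CF_BASELINE)

# Express that gain as the number of new plants that would deliver it:
equivalent_plants = extra_average_mw / (NEW_PLANT_MW * NEW_PLANT_CF)
print(f"Roughly {equivalent_plants:.0f} equivalent {NEW_PLANT_MW:,}-MW plants")
```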

Near-term prospects

Thanks to improvements derived from operational experience and innovative reactor technologies, prospects have recently been enhanced for deploying new nuclear plants in the near term in the United States, Europe, and Asia that will be even safer and more reliable. Advanced light water reactors (ALWRs) have been developed in a program managed by the Electric Power Research Institute (EPRI) and cost-shared by the U.S. Department of Energy (DOE), U.S. reactor manufacturers, and utilities in the United States, Europe, and Asia.

ALWRs in the power range of 1,000 to 1,200 MW have been developed that derive their improved design and operational features from extensive worldwide licensing and operating experience with LWR systems. A 600-MW ALWR incorporating innovative passive (gravity and pressurized gas) emergency core and containment cooling systems has also been developed. These passive systems replace the electrically or steam-powered pumping systems used in the conventional plants, resulting in a simpler and less costly design.

Four 1,350-MW ALWRs of the boiling water type (ABWRs), designed jointly by General Electric (GE) and Hitachi/Toshiba, have already been built in Japan. Two more are under construction in Taiwan. South Korea is also building four Westinghouse 1,000-MW ALWR plants of the pressurized water type (APWR).

All of these designs have been certified by the U.S. Nuclear Regulatory Commission (NRC). The NRC has also certified a 600-MW passively cooled APWR, the Westinghouse AP-600, after extensive tests of its passive cooling features. China has continued to expand its nuclear power capacity, and is presently building two more 1,000-MW APWRs under French contracts. Finland has awarded a contract to Framatome/Siemens to build a 1,600-MW APWR. France is nearing a decision on whether to authorize a 1,600-MW plant of the same design.

Because of their relatively high capital cost, these plants do not yet compete economically with fossil power, at least in the United States. Consequently, efforts are under way to further reduce their capital cost. Westinghouse has developed the AP-1000, a 1,000-MW version of its AP-600 that could reach economic competitiveness through economy of scale. It is now being reviewed for an NRC design certification. GE is developing a 1,350-MW passively cooled ABWR, the ESBWR, with similar economic promise, and has applied for NRC design certification.

A significant increase in the price of natural gas could make new nuclear plants economically competitive even without further reductions in their capital costs. The competitive position of the combined-cycle gas-fired turbine (CCGT) power plant, the type most favored for new generation capacity over the past two decades, is highly sensitive to the price of gas. For most of this period, gas prices have been in the range of $3 to $4 per million British thermal units (MMBTU). At those rates, the overnight capital cost (the cost excluding interest on capital) of a new nuclear plant would need to be in the range of $1,000 per kilowatt (kW) to be competitive, which is the cost goal of the AP-1000 and the ESBWR. But if gas remains at its current price of $5 to $6 per MMBTU, a competitive nuclear plant overnight capital cost could be as high as $1,300 to $1,400 per kW, the present estimate for the conventionally cooled ABWR.
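
The logic of that comparison can be sketched with a simplified levelized-cost calculation: given a gas price, compute the cost of power from a combined-cycle plant, then solve for the nuclear overnight capital cost at which the two are equal. All parameter values below are illustrative assumptions supplied here; because the sketch omits fuel-price escalation, taxes, and financing detail, it is more sensitive to gas prices than the detailed studies quoted above and only roughly tracks their figures.

```python
# Break-even sketch: at a given gas price, what nuclear overnight capital cost
# would give the same simplified levelized cost as a combined-cycle gas plant?
# All parameter values are illustrative assumptions; the toy model omits
# fuel-price escalation, taxes, and financing detail.

HOURS_PER_YEAR = 8760

def lcoe_gas(gas_price_per_mmbtu,
             overnight_per_kw=500.0,    # assumed CCGT overnight cost, $/kW
             heat_rate=7.0,             # assumed heat rate, MMBTU per MWh
             fixed_charge_rate=0.15,    # assumed annual carrying charge
             capacity_factor=0.85,
             om_per_mwh=3.0):
    """Simplified levelized cost of a gas-fired combined-cycle plant, $/MWh."""
    capital = overnight_per_kw * 1000 * fixed_charge_rate / (HOURS_PER_YEAR * capacity_factor)
    return capital + heat_rate * gas_price_per_mmbtu + om_per_mwh

def breakeven_nuclear_overnight(gas_price_per_mmbtu,
                                nuclear_fuel_om_per_mwh=17.0,  # assumed nuclear fuel + O&M, $/MWh
                                fixed_charge_rate=0.15,
                                capacity_factor=0.90):
    """Nuclear overnight capital cost ($/kW) that equalizes the two levelized costs."""
    allowed_capital_per_mwh = lcoe_gas(gas_price_per_mmbtu) - nuclear_fuel_om_per_mwh
    return allowed_capital_per_mwh * HOURS_PER_YEAR * capacity_factor / (1000 * fixed_charge_rate)

for gas_price in (3.5, 5.5):
    print(f"Gas at ${gas_price}/MMBTU -> break-even nuclear overnight cost "
          f"~${breakeven_nuclear_overnight(gas_price):,.0f}/kW")
```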

These cost comparisons focus on gas-fired plants because the CCGT has been the technology of choice for new capacity. If gas prices remain high, coal-fired plants could become the prime competitor with nuclear plants. In that case, nuclear power might prevail, partly because the present cost gap is smaller and partly because of another important part of the energy equation: environmental costs.

The environmental costs of nuclear power are internalized; that is, they are largely included in the cost of construction, operation, and insurance and are added to the price of electricity. That is not the case with fossil fuel plants. The market does not currently reward nuclear power’s environmental benefits nor have the environmental costs from fossil fuel plants been fully internalized. And yet nuclear power has a clear environmental edge, helping to lower average emissions from the power industry overall. Between 1973 and 2001, U.S. power plants emitted 70.3 million fewer tons of sulfur dioxide, 35.6 million fewer tons of nitrogen oxides, and 2.97 billion fewer tons of carbon dioxide than if nuclear power had not been part of the energy mix. Without major deployment of nuclear energy and noncombustible renewables, the world’s total carbon dioxide emissions from power generation are expected to grow from 23 billion tons in 1990 to 40 billion tons in 2020. For the time being, the avoidance of greenhouse gas emissions through nuclear power has not been recognized in the Clean Development Mechanism of the UN Framework Convention on Climate Change as one of the methods allowed for achieving the required reduction. Nor are nuclear plants eligible for emissions trading to gain financial credit for their contribution to reduced air pollution and greenhouse gas emissions.

But that could change. If the costs of greenhouse gas emissions from fossil fuel plants are internalized (say, if the plants are required to build carbon separation and sequestration systems or to pay a carbon tax) or if emissions trading is granted to nuclear plants, the economic tables would be turned. Add to that the financial risk arising from the greater fuel supply and cost instabilities of fossil fuel plants, and it becomes apparent that nuclear power might be on the threshold of achieving economic competitiveness.

Another issue that must be cleared up to allow a sustained expansion of nuclear power is the disposition of spent fuel, virtually all of which is currently stored at the nuclear power plant sites. Progress is being made, albeit slowly, toward the implementation of permanent repositories. In the United States, Congress has authorized DOE to proceed with the licensing of a permanent repository at Yucca Mountain in Nevada. The site is proposed for the disposition of some 70,000 tons of used fuel, which is sufficient for the 40,000-plus tons produced to date and for some 20 years to come. The authorization was based on more than 10 years of intensive R&D and engineering studies. If a construction license is granted, DOE will begin construction in early 2008. Before completion, DOE will update its application for a license to receive and possess waste, as required by NRC regulations. If that license is granted, waste could begin arriving as early as 2010.

Other countries are also making progress in radioactive waste management. Sweden has put into operation an efficient repository of adequate capacity for its low-level nuclear plant wastes and has begun the design and licensing of an intermediate-level waste repository. Finland has adequate storage capacity for its low-level wastes, and a spent-fuel repository is being designed and licensed. France has decided to build two underground laboratories for research on spent-fuel disposition, one in clay and one in granite. Most other countries are at an earlier stage.

The security of nuclear facilities against attack has been addressed urgently ever since 9/11. Initial evaluations suggest that nuclear power plants with containment (all except some in Russia), fuel storage facilities, and transport casks are robust against such attack. Nevertheless, plant security has been substantially bolstered. The NRC is expanding safety regulations to include the possibility of attacks on nuclear plants, both by increasing security requirements and by defining a “design basis threat” against which every nuclear plant must be evaluated. Other nations are making similar evaluations.

One obstacle to expanded nuclear power is licensing uncertainty. In the United States, changes in licensing requirements after the start of construction and delays in getting the operating permit after completion have in the past greatly increased capital costs and construction time. To cope with this problem, the NRC established a licensing standardization policy that allows a reactor manufacturer to seek a site-independent design certification and a prospective plant owner/operator to obtain a separate early site permit. With a certified design and an early site permit, a combined construction and operating permit can be obtained before any money is invested in plant equipment and construction.

In light of all these developments, the prospects for recommencing new construction in the United States are fairly strong. Congress has authorized a joint cost-shared DOE/industry program called the Nuclear Power 2010 Initiative, which aims to begin building new nuclear plants in the United States around 2010. The planning framework is contained in DOE’s Near Term Deployment Roadmap. First priority is being given to resolving critical issues such as competitive costs and to defining the private-sector financing mechanisms.

New uses

Near-term deployment of new nuclear plants will strengthen the resource and skill base in the nuclear industry, providing a foundation on which more advanced designs and a broader scope of power applications can be developed. Nuclear energy is presented with four major future opportunities, each requiring major long-term RD&D:

  • Expanding the end uses of nuclear electricity for tasks such as powering electric vehicles and providing high-temperature heat for industrial processes.
  • Developing economical hydrogen fuel production and desalination using nuclear energy to provide inexpensive bulk power.
  • Building nuclear plants that run on reprocessed spent fuel, which will ensure that the fuel supply will be adequate for centuries.
  • Developing economical small-output nuclear plants that could provide the benefits of nuclear power to smaller and less developed countries.

DOE has launched a pair of efforts–the Generation IV Program and the Advanced Fuel Cycle Initiative (AFCI)–to carry out the RD&D to realize those four opportunities while achieving economic competitiveness, high standards of safety and proliferation resistance, and effective waste management. The Generation IV Program has chosen for initial study six different reactor concepts for development: gas-cooled, sodium-cooled, lead-cooled, molten salt-cooled, supercritical water-cooled, and very-high-temperature gas-cooled. All would operate at high temperatures to achieve greater efficiency. The very-high-temperature gas-cooled reactor has the potential to be an efficient hydrogen producer to provide fuel for the transportation sector so as to reduce dependence on offshore oil.

International cooperation is being fostered through the Generation IV International Forum, which includes representatives from 10 countries (Argentina, Brazil, Canada, France, Japan, the Republic of South Africa, the Republic of Korea, Switzerland, the United Kingdom, and the United States), and through the IAEA’s advanced reactor development program (INPRO).

An expanded long-term reliance on nuclear power is possible only if uranium supplies are adequate. Assuming a modest growth rate for nuclear power of 2 percent per year until 2050, and assuming continued operation without fuel recycling, annual uranium requirements would grow by a factor of about three, to roughly 200,000 tons. The cumulative uranium requirement from now to 2050 would exceed 5 million tons. The IAEA and the Organisation for Economic Co-Operation and Development estimate that some 4 million tons of uranium would be available at costs of up to $130 per kilogram (about twice current prices), resulting in a deficit of roughly 1 million tons of natural uranium by 2050. A major goal of the AFCI is to close this gap by developing proliferation-resistant fuel recycling for one or more of the Generation IV concepts. Success in these technologies could expand nuclear fuel resources a hundredfold.
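
The cumulative-demand arithmetic behind those figures can be reproduced approximately as follows, assuming a starting requirement of about 68,000 tons of natural uranium per year (an assumption supplied here, not a figure from the text) growing at the stated 2 percent per year; the result is a cumulative requirement of a little more than 5 million tons and a deficit of roughly 1 million tons against 4 million tons of identified resources.

```python
# Cumulative uranium requirement to 2050 under 2 percent annual growth.
# The starting annual requirement is an assumption supplied here.

START_YEAR, END_YEAR = 2005, 2050
annual_requirement = 68_000.0   # assumed tons of natural uranium per year at the start
growth = 0.02                   # 2 percent per year, as stated in the text
known_resources = 4_000_000.0   # tons available at up to $130/kg (from the text)

cumulative = 0.0
for _ in range(START_YEAR, END_YEAR + 1):
    cumulative += annual_requirement
    annual_requirement *= 1 + growth

print(f"Cumulative requirement {START_YEAR}-{END_YEAR}: ~{cumulative / 1e6:.1f} million tons")
print(f"Deficit versus identified resources: ~{(cumulative - known_resources) / 1e6:.1f} million tons")
```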

A variety of fuel cycles are under consideration, including plutonium and thorium recycling in conventional LWRs and in advanced fast-spectrum reactors. Advanced aqueous and innovative pyrometallurgical reprocessing options are being pursued. Increased nuclear fuel resources are achieved by producing more plutonium during operation and thus creating more fuel than is burned. Alternatively, thorium fuels can be used to produce fissionable uranium-233. The goal for all variants is to retain the actinides in the reprocessed fuel so as to eliminate the potential for diversion of fissionable material from the waste stream and to minimize its long-lived radioactivity content.
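
A rough sense of where a hundredfold expansion could come from: a once-through light water reactor fissions on the order of half a percent of the mined natural uranium (mostly U-235, plus some plutonium bred in place), whereas full recycle in breeders can eventually draw on the fertile U-238 that makes up the other 99-plus percent. The utilization figures in the sketch below are rough, commonly quoted values, not numbers from the text.

```python
# Rough resource multiplier from full recycle ("breeding") versus once-through use.
# Utilization figures are rough, commonly quoted values, not from the article.

once_through_utilization = 0.006   # ~0.6% of mined natural uranium fissioned in a once-through LWR
full_recycle_utilization = 0.60    # ~60% usable with breeding and recycle, allowing for losses

print(f"Resource multiplier: ~{full_recycle_utilization / once_through_utilization:.0f}x")
```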

Nuclear plants of small nominal output could extend the benefits of nuclear technology to small developing nations. To achieve cost competitiveness through economy of scale, present nuclear plants are in the range of 1,000 to 1,500 MW. But the grid capacity of many developing countries is too small to justify such a large single block of power. The Generation IV Program is pursuing the concept of small, integrated, transportable, lead alloy-cooled power packages in the 100-MW range that do not require refueling and could provide power over a 10-year period. A key goal is to ensure that these plants are highly resistant to proliferation.

Both the Generation IV Program and the Nuclear Power 2010 Initiative are receiving an infusion of ideas from DOE’s Nuclear Energy Research Initiative (NERI), which fosters innovative R&D on advanced nuclear energy concepts and technologies. NERI recently completed the first round of 46 research projects initiated in fiscal year 1999. The effort marshals the talents of more than 250 U.S. university students and includes collaborations with more than 25 international organizations.

Proliferation resistance

Although there has been no known diversion of weapons-usable nuclear material from safeguarded civilian facilities since the inception of the IAEA, among the problems still facing nuclear power is the need to boost resistance to proliferation. In fact, nuclear power plants are now being used to reduce one proliferation risk: the potential diversion of highly enriched uranium (HEU) and plutonium declared excess under the U.S.-Russian START nuclear arms reduction agreements. These excess weapons materials are being disposed of by converting them to fuel for electricity generation in U.S. nuclear plants. About one-third of the Russian HEU stockpile has already been processed, permanently disposing of the weapons material from 6,000 nuclear warheads.

Yet the fact that nuclear power development has been used as a cover for nuclear weapons development is cause for concern. NPT signatories need to be prevented from engaging in any such deceptions. The most critical need is to put teeth into NPT enforcement through the UN Security Council or through a separate entity such as the one evolving under the multilateral Proliferation Security Initiative. Other urgent needs are to upgrade export controls and materials inventory and to strengthen IAEA inspection and monitoring of NPT compliance. Such high-priority institutional reforms, which also apply to activities outside the scope of nuclear power, are discussed in the accompanying articles in this issue.

Beyond institutional measures, the plants themselves should incorporate improved design features that render them inherently more resistant to proliferation. Improved analytical assessments should be conducted to identify the points at which nuclear power plants and related facilities are most vulnerable and to suggest design remedies. Possible approaches include making weapons-usable materials less accessible; erecting chemical, physical, and radiation barriers; limiting the ability of an enrichment facility to produce weapons-usable material; and increasing the time required to effect a diversion. If intrinsic design features were improved, the institutional tasks of surveillance, monitoring, inspection, accountability, and physical security would also become easier. Such analyses could determine the proper balance between intrinsic features and institutional control processes. An outline of the overall assessment process, the R&D necessary to develop it, and potential intrinsic proliferation-resistant features is contained in the DOE report, Technology Opportunities to Increase the Proliferation Resistance of Civilian Nuclear Power Systems.

Another defense against proliferation is the concept of regional fuel services. Although the Eisenhower proposal for an international bank of fissionable material never materialized, the idea has merit as a means of handling those portions of the nuclear fuel cycle that are of primary concern from a proliferation standpoint: uranium enrichment and plutonium separation. If countries interested in developing nuclear power were provided such services, there would be no reason for them to invest in fuel-processing facilities that could be used to divert weapons-usable materials.

Both government and private organizations could provide such services under strict regulation, with complete transparency, and with unconstrained access for compliance monitoring. They would need to meet high standards of accreditation and have a record of compliance with the NPT. Contractual arrangements for these services would have to ensure a steady fuel supply. Large commercial facilities now provide such fuel services globally, and they could continue to do so upon accreditation under the stricter international nonproliferation regime that will be needed for the future.

The regional/international services concept could also be extended to the storage and disposition of spent fuel and high-level waste. Presently, individual nations carry these responsibilities. Although the IAEA sets international standards, they are followed at the discretion of each country. For many countries, high cost, political opposition, and a limited number of qualified sites make the development of geological repositories very difficult. Another concern is that spent fuel repositories will become less resistant to proliferation once their radiation levels have decayed for a century or so. For these reasons, cooperative regional repositories may become an appropriate way to provide a broad base of support for protecting these facilities.

Recently, several proposals have been made to create international spent fuel storage facilities and repositories, as well as fuel-processing facilities. In each scheme, the IAEA would be the authority responsible for verifying adherence to stringent safeguards and ensuring the transparency and accountability of related activities. Bringing such a plan to fruition will not be easy, but it should be made a goal for a continued Atoms for Peace vision.

There are strong reasons to believe that Eisenhower’s vision of serving “the peaceful pursuits of mankind” through nuclear energy can be more fully realized in the years ahead. The enormous projected growth in electricity demand to serve a greatly expanded global population and to redress the economic imbalance among nations makes clear the need. Nuclear energy, which produces essentially no air pollution or greenhouse gas emissions, can help to meet that need and be put to other peaceful uses if economic competitiveness can be achieved.

Recent proliferation challenges by rogue states and terrorists make Eisenhower’s call “to reverse the atomic military buildup” as relevant today as it was 50 years ago. The NPT, the IAEA, and the cooperation of many nations have helped stem that buildup. With the support of the UN Security Council, they could go on to remedy the current inadequacies in the international nonproliferation regime.

These actions are needed to address the weaknesses on both sides of the nuclear “bargain.” They must be supplemented by greater public acceptance of nuclear power–acceptance that can be gained only through an excellent record of safety and reliability and through open communication with the public about the benefits and the risks of nuclear power. The tasks are not easy and the outcomes not certain. What is certain is the urgent need, in the words of Eisenhower, to turn “this greatest destructive force . . . into a great boon for the benefit of all mankind.”

Recommended reading

International Institute for Applied Systems Analysis (IIASA), Global Energy Perspectives (Laxenburg, Austria: IIASA, 1998).

Nuclear Energy Research Advisory Committee of the U.S. Department of Energy, A Roadmap to Deploy New Nuclear Power Plants in the US by 2010 (Washington, D.C.: DOE, October 2001).

Nuclear Energy Advisory Committee of the U.S. Department of Energy, Technology Opportunities to Increase the Proliferation Resistance of Civilian Nuclear Power Systems (Washington, D.C.: DOE, January 2001).

U.S. Department of Energy, The U.S. Generation IV Implementation Strategy, Preparing Today for Tomorrow’s Energy Needs (Washington, D.C.: DOE, September 2003).

U.S. Department of Energy, Advanced Fuel Cycle Initiative Comparison Report, FY 2003 (Washington, D.C.: DOE, October 2003).


John J. Taylor ([email protected]), retired vice president for nuclear power at the Electric Power Research Institute, is a consultant to the Center for Global Security Research at Lawrence Livermore National Laboratory in Livermore, California.