An antidote to sprawl

Bruce Babbitt, former Arizona governor and U.S. secretary of the interior, proposes not so much a new vision of land use in the United States, as indicated in the subtitle of his book, but rather a new vision of land-use planning. In Babbitt’s scheme, the federal government would take a key role in preserving natural and cultural landscapes, largely by reining in urban sprawl. The government would accomplish these goals not by simply mandating change but by working closely with state and local governments, as well as with property owners, developers, environmental organizations, and other constituencies, in a political system that Babbitt aptly calls “participatory federalism.”

Cities in the Wilderness comes with endorsements on the back cover from Harvard biologist E. O. Wilson, architect Frank Gehry, and former president Bill Clinton. Their praise is well deserved. Babbitt’s main argument is bold and compelling, his presentation engaging and informative. Throughout the book, one sees the distillation of wisdom acquired over many years of fighting environmental battles at the national, state, and local levels. Babbitt has been there, and it shows.

Babbitt’s underlying argument is that destructive urban sprawl continues largely unabated because our political institutions have not been adequate to the task of controlling it. Land-use planning has generally been the province of local governments, which rarely have the political power or financial resources to contend with developers constantly seeking exemptions from existing plans. The complex welter of local jurisdictions makes the situation even more intractable. If sprawl is successfully combated in one city, builders can usually move to lands just outside the city limits or to adjacent counties. Only the federal government, Babbitt contends, has the scale of power and the scope of vision necessary to ensure that wild and pastoral landscapes remain undeveloped.

The federal government can best take on its land-planning responsibility, Babbitt argues, by using and expanding its existing powers, especially as embodied in legislation such as the Endangered Species Act, the Clean Water Act, and the National Antiquities Act. But this does not mean unilateral federal action. Babbitt favors a deliberative approach that involves intense negotiations and numerous tradeoffs. Far from utopian, his method is fully pragmatic, based firmly on his own lengthy experiences in environmental politics.

Much of Cities in the Wilderness is devoted to case studies that exemplify how informed federal leadership has already brought about major environmental victories; victories, moreover, that are locally and nationally popular. The first substantive chapter deals with the Everglades, where a state and federal partnership is now promising not just environmental preservation but actual restoration. Babbitt tells the complicated story of the negotiations involved in developing a comprehensive Everglades restoration plan, which involved not just taking on developers, “big sugar,” and the Army Corps of Engineers, but actually winning them over. His overriding question—what went right?—is essential if perhaps counterintuitive. “Why such a spectacular success in the Everglades,” Babbitt asks, “in a time of failures elsewhere?” He answers his query by addressing the specificities of southern Florida’s natural and political landscapes and by detailing the hard work and creative approaches undertaken by the federal negotiating team. Much opposition was encountered along the way. Even prominent environmental leaders at one point accused Babbitt and company of having sold them out, a charge that Babbitt has endured on several other occasions. The end result for the Everglades, however, is rightfully seen as a landmark of both environmental protection and cooperative federal leadership.


From the Everglades, Babbitt moves on to consider a number of other federal/local attempts to restrict suburbanization and in so doing to preserve natural habitats. Such efforts have met with some success, although not as impressively as in the case of southern Florida. Most of Babbitt’s other examples come from the West, ranging from the conservation of the last tracts of coastal sage in southern California to the struggle to keep the few remaining perennial streams in southern Arizona from running dry. The most eye-opening of these case studies is that of Las Vegas.

Las Vegas began to explode in size and population, Babbitt explains, as the ironic consequence of a deal cut to preserve Lake Tahoe, hundreds of miles away on the California-Nevada border. Sensitive lands around the lake were to be protected through purchase, an expensive option to say the least. The requisite funds were to be obtained by selling federal lands around Las Vegas to developers. Although the plan helped safeguard Lake Tahoe, it came at the price of massive sprawl in southern Nevada, in the process further endangering several already threatened species, including the desert tortoise. The challenge thus became one of limiting the future expansion of Las Vegas while making sure that sensitive ecosystems on its periphery would be adequately protected. Fortunately, the city’s residents simultaneously began to demand parks, open space, and other outdoor amenities. When the new U.S. senator from Nevada, Harry Reid, took up their cause, an urban perimeter was established around the city, promising ecological security for outlying areas. Reid and other environmental leaders were greatly aided in this endeavor by the Endangered Species Act.

Las Vegas, with its gaudy exuberance and wasteful use of resources, can hardly be considered an environmental model. Babbitt’s aim, however, is more modest than that. His new land-use vision concentrates on areas beyond the urban fringe, in which reasonably intact ecosystems can still be found. He would not lock up such places, not most of them at any rate, from economic exploitation, but rather would subordinate mining, grazing, and logging to an overriding public mandate for long-term biological integrity. Such a mandate will be all but impossible to fulfill as long as cities continue to expand willy-nilly into their rural peripheries. Bounding urban areas, and thus forcing further growth to be intensive rather than extensive, emerges as a cornerstone of Babbitt’s larger project. He correspondingly argues that “cities . . . should be visualized like an archipelago, as islands surrounded by a sea of open landscapes” (hence the book’s title). A more intensive system of urban land use, moreover, would help the United States reduce its dependence on oil, to both environmental and geopolitical advantage, which presumably is the reason why E. O. Wilson in his blurb on the back cover calls the book not only “marvelous” but also “patriotic.”

The threat of rural sprawl

Babbitt’s concerns go beyond the threat of urban expansion to include what he calls, in a delightfully oxymoronic turn of phrase, “rural sprawl.” In his cleverly entitled chapter “What’s the Matter with Iowa?”, Babbitt defines this problem as the relentless spread of corn and soybean fields from fence row to fence row, leaving virtually no space for wildlife and threatening the state’s agricultural base with soil erosion and water contamination. Such agricultural overextension, Babbitt carefully shows, is not the result merely of market forces but is also partially rooted in the often illogical and not uncommonly contradictory policies of the U.S. government. Especially considering the historically rooted role of the federal government in excessive agricultural extension, it would seem reasonable to expect Washington to take a lead role here as well. If only 20% of the tall grass prairies could be reborn, Babbitt argues, effective ecosystem functions could be restored in the Corn Belt. In this instance, international pressure along with the World Trade Organization (WTO) might force the federal government to reduce if not eliminate the agricultural subsidies that so encourage overplanting. The most reasonable approach in such an event, Babbitt argues, would be a WTO-compliant farm nexus that would engage landowners in a comprehensive program of land and water restoration as a condition of federal aid.

One of the pleasures in Cities in the Wilderness is Babbitt’s willingness to reveal how eco-political negotiations work in practice. At times he seems keen to divulge secrets that almost anyone else in his shoes would have preferred to keep concealed. When Babbitt was governor of Arizona, for example, he quickly ran into a major water resource dilemma. The federal government was then building the Central Arizona Project to deliver Colorado River water to the rapidly urbanizing south-central portion of the state, but by law it was supposed to withhold deliveries to any jurisdiction that had failed to develop a plan to control groundwater depletion. Unlike his predecessors, then–Secretary of the Interior Cecil Andrus intended to enforce the law. Babbitt, however, felt that he had no political choice but to demand that Washington stay out of Arizona’s local business of apportioning water. In private, however, he negotiated with Andrus to force the creation of the mandated groundwater plan. At one point, Babbitt urged the secretary to threaten that the federal government would pull support for the entire water project if compliance was not gained. Andrus agreed and made the threat, which Babbitt immediately and publicly denounced, as he had promised. Once the ritual denunciation had been made, however, the hard work of developing a state groundwater plan began in earnest.

Cities in the Wilderness embodies that rare combination of an argument that is at once pragmatic, honest, and visionary. One can only hope that it will be well read and well heeded. It will, of course, generate substantial opposition. Many will reject on principle Babbitt’s call for increased federal involvement in land-use management, whereas others will be threatened by many of his specific proposals, such as his call for eliminating grazing on federally owned desert lands. Certainly the current administration in Washington will not take kindly to most of Babbitt’s proposals. But even President George W. Bush has recently declared that the country is “addicted to oil.” So perhaps the time has finally come when such ideas can get the attention that they deserve.

Archives – Spring 2006

TIM ROLLINS + K.O.S. AND BANNEKER HIGH SCHOOL PARTICIPANTS, On the Nature of the Universe (after Lucretius), Watercolor, acrylic ink, India ink, aqaba paper, collage, and book paper on canvas, 76 x 54 inches, 2005. (From the collection of the National Academy of Sciences)

On the Nature of the Universe (after Lucretius)

Tim Rollins, a conceptual artist and teacher, established an after-school art workshop in 1982 for teenagers whose schools classified them as learning disabled, dyslexic, or emotionally handicapped. The self-named K.O.S. (Kids of Survival) is an evolving group of these teens who regularly attend Rollins’s workshop in New York’s South Bronx. The group’s artworks are included in the permanent collections of more than seventy major museums, including the Hirshhorn Museum and Sculpture Garden and the National Gallery of Art in Washington, D.C., and the Whitney Museum of American Art in New York City.

The National Academy of Sciences commissioned Rollins to lead a three-day workshop with students at Benjamin Banneker High School, a math and science magnet school in Washington, D.C., known for academic excellence and highly motivated students. During the workshop, participants created the artwork pictured here based on their study of Lucretius’s ancient didactic poem On the Nature of the Universe. In this epic, written in the first century B.C., the poet aims to free humanity from its superstitions and fears about death and celebrates the teachings of Epicurus, the Greek philosopher who believed that the cosmos was composed of atoms.

Forum – Spring 2006

In “Rethinking, Then Rebuilding New Orleans” (Issues, Winter 2006), Richard E. Sparks presents a commendable plan for rebuilding a limited and more disaster-resistant New Orleans by protecting the historic city core and retreating from the lowest ground, which was most severely damaged after Katrina. It is, however, a plan with zero chance of implementation without an immediate and uncharacteristic show of political backbone. The more likely outcome is a replay of the recovery from the 1993 Midwestern flood. That disaster led to a short-term retreat from the Mississippi River floodplain, including $56.3 million in federal buyouts in Illinois and Missouri. More recently, however, St. Louis alone has seen $2.2 billion in new construction on land that was under water in 1993. The national investment in reducing future flood losses has been siphoned off in favor of local economic and political profits from exploiting the floodplain. The same scenario is now playing out in New Orleans, with city and state leaders jockeying to rebuild back to the toes of the same levees and floodwalls that failed last year.

Another common thread between 1993 and 2005 is that both disasters resulted from big storms but were caused by overreliance on levees. Hurricane Katrina was not New Orleans’ “perfect storm.” At landfall Katrina was just a Category 3 storm, striking the Mississippi coast and leaving New Orleans in the less damaging northwest quadrant. For Biloxi and Gulfport, Katrina was a hurricane disaster of the first order, but New Orleans awoke on Aug. 30 with damage largely limited to toppled trees and damaged roofs. For New Orleans, Katrina was a levee disaster: the result of a flood-protection system built too low and protecting low-lying areas considered uninhabitable through most of the city’s history.

The business-as-usual solution for New Orleans would include several billion dollars’ worth of elevated levees and floodwalls. But what level of safety will this investment actually buy? Sparks mentions a recent study by the Corps of Engineers that concluded that Mississippi River flood risks had dropped at St. Louis and elsewhere, despite ample empirical evidence of flood worsening. The central problem with such estimates is that they are calculated as if disaster-producing processes were static over time. In actuality, these systems are dynamic, and significant changes over time have been documented. New Orleans could have been inundated just one year earlier, except that Hurricane Ivan veered off at the last moment. Hurricane Georges in 1998 was another near-miss. With three such storms in the past eight years—and still no direct hit to show the worst-case scenario—the 2005 New Orleans disaster may be repeated much sooner than official estimates would suggest.

Current guesstimates suggest that a third to a half of New Orleans’ pre-Katrina inhabitants may not return. This represents a brief golden opportunity to do as Sparks suggests and retreat to higher and drier ground. Previous experience shows us that our investments in relief and risk reduction will be squandered if short-term local self-interest is allowed to trump long-term planning, science, and leadership.

NICHOLAS PINTER

Professor of Geology

Environmental Resources and Policy Program

Southern Illinois University

Carbondale, Illinois


Recently, we have heard much about the loss of wetlands and coastal land in southern Louisiana. Richard E. Sparks asserts that “If Hurricane Katrina, which pounded New Orleans and the delta with surge and heavy rainfall, had followed the same path over the Gulf 50 years ago, the damage would have been less because more barrier islands and coastal marshes were available then to buffer the city.” The inverse of this logic is that if we can somehow replace the vanished wetlands and coastal land, we can create a safer southern Louisiana.

Many are now arguing for a multi–billion-dollar program to rebuild the wetlands of coastal Louisiana. After Katrina, the argument has focused on the potential storm protection afforded by the wetlands. I believe that restoring Louisiana’s wetlands is an admirable goal. The delta region is critical habitat and a national treasure. But we should never pretend that rebuilding the wetlands will protect coastal communities or make New Orleans any safer. Even with the wetlands, southern Louisiana will remain what it has always been: extremely vulnerable to natural hazards. No engineering can counteract the fact that global sea level is rising and that we have entered a period of more frequent and more powerful storms. Implying, as some have, that environmental restoration can allow communities to remain in vulnerable low-lying areas is irresponsible. Yes, let’s restore as much of coastal Louisiana as we can afford to, but let’s do it to regain lost habitat and fisheries, not for storm protection.

In regard to the quote above, it is unlikely that additional wetlands south of the city would have prevented the flooding from Katrina (closing the Mississippi River Gulf Outlet is another issue). I certainly agree with Sparks that we should look to science for a strategy to work with nature and to get people out of harm’s way.

ROBERT S. YOUNG

Associate Professor of Geology

Western Carolina University

Cullowhee, North Carolina


Richard E. Sparks presents a comprehensive and sensible framework to help address the flood defense of the greater New Orleans area. The theme of this framework is the development of cooperation with nature, taking advantage of the natural processes that have helped protect this area in the past from flooding from the Mississippi River, hurricane surges, and rain. Both short-term survival and long-term development must be addressed simultaneously, so that in the rush to survive we do not seriously inhibit long-term protection. A massive program of reconstruction is under way to help ensure that the city can withstand hurricanes and deluges until the long-term measures can be put in place. I have just returned from a field trip to inspect from the air and on the ground the facilities that are being reconstructed. I had many discussions with those who are directing, managing, and performing the work. Although significant progress has been made, the field trip left me with a feeling of uneasiness that the rush to survive is developing expectations of protection that cannot be met; a sufficiently qualified and experienced workforce, materials, and equipment are lacking. In this case, a quick fix is not possible, and sometimes the fixes are masking the dangers.

A consensus on the ways to provide an adequate flood defense system for the greater New Orleans area is clearly developing. What has not been clearly defined is how efforts will be mobilized, organized, and provisioned to give long-term protection. Flood protection is a national issue; it is not unique to the greater New Orleans area. The entire Mississippi River valley complex and the Gulf of Mexico and Atlantic coasts are clearly challenged by catastrophic flooding hazards. There are similar challenges lurking in other areas, such as the Sacramento River Delta in California. What is needed is a National Flood Protection Act that can help unify these areas, encouraging the use of the best available technology and helping ensure equitable development and the provision of adequate resources.

Organizational modernization, streamlining, and unification seem to be the largest challenge to realizing this objective. This challenge involves much more than the U.S. Army Corps of Engineers. The Corps clearly needs help to restore the engineering quality of its earlier days and to adopt, advance, and apply the best available technology. The United States must also resolve that what happened in New Orleans will not happen again, there or elsewhere. Leadership, resolution, and long-term commitment are needed to develop high-reliability organizations that can and will make what needs to happen actually happen. This requires keeping the best of the past; dropping those elements that should be discarded; and then adopting processes, personnel, and other elements that can efficiently provide what is needed. It can be done. We know what to do. But will it be done? The history of the great 2005 flood of the greater New Orleans area (and the other surrounding areas) shows that we pay a little now or much, much more later.

ROBERT BEA

Department of Civil & Environmental Engineering

University of California, Berkeley


Although the suggestion by Richard E. Sparks of bypassing sediment through the reservoirs and dams that have been built on the Missouri River merits a good deal of further consideration, one must be warned at the outset that its effect on the delivery of restorative sediment to the coastal wetlands of Louisiana would not be immediate. Although the historical record shows that the downstream sediment loads decreased dramatically and immediately after the completion of three (of the eventually five new) mainstem Missouri River dams in 1953–1954, the reversal of this process—bypassing sediment around the dams or even completely removing the dams—would likely be slow to start and would require at least half a century to have its full effect on the sediment loads of the Mississippi River near New Orleans.

Even if the sediment that passed Yankton, South Dakota (present site of the farthest-downstream dam on the Missouri River mainstem), were restored to the flowing river in its pre-dam quantities, at least two other factors would retard its progress toward the Gulf of Mexico. First are the changed hydraulics of the river that result from the operations of the dams themselves, especially the holding back of the annual high flows that formerly did much of the heavy lifting of sediment transport. Second are the architectural changes that have been engineered into the river channel over virtually the entire 1,800 miles of the Missouri and Mississippi between Yankton and New Orleans. Those in the Missouri River itself (the first 800 miles between Yankton and the confluence with the Mississippi River near St. Louis) may enhance the downriver progress of sediment because they consist mainly of engineering works that have narrowed and deepened the preexisting channel so as to make it self-scouring and a more efficient conduit for sediment-laden waters. Those in the 1,000 miles of the Mississippi River between St. Louis and New Orleans, however, are likely to impede, rather than enhance, the downstream progress of river sediment. This is especially true of the hundreds of large wing dams that partially block the main channel and provide conveniently located behind-and-between storage compartments, in which sediment is readily stored, thus seriously slowing its down-river movement. Eventually we might expect a new equilibrium of incremental storage and periodic remobilization to become established, but many decades would pass before the Mississippi could resume its former rate of delivery of Missouri River–derived sediment to the environs of New Orleans.

We probably could learn much from the experience of Chinese engineers on the Yellow River, where massive quantities of sediment have been bypassed through or past large dams for decades. As recently as 50 to 60 years ago, before the construction of the major dams and their bypassing works, the Yellow River was delivering to its coastal delta in the Gulf of Bo Hai a quantity of sediment nearly three times the quantity that the undammed Missouri-Mississippi system was then delivering to the Gulf of Mexico. Despite the presence of the best-engineered bypassing works operating on any of the major river dams of the world, however, the delivery of sediment by the Yellow River to its coastal delta is now nearly zero.

ROBERT H. MEADE

Research Hydrologist Emeritus

U.S. Geological Survey

Denver Federal Center

Lakewood, Colorado


Save the Kyoto Protocol

We agree with Ruth Greenspan Bell (“The Kyoto Placebo,” Issues, Winter 2006) that global climate change is a problem deserving of serious policy action; that many countries have incomplete or weak regulatory systems lacking transparency, monitoring, and enforcement; and that many economies lack full market incentives to cut costs and maximize profits. We agree that policy choice must be pragmatic and sensitive to social context. We also agree that the Kyoto Protocol has weaknesses as well as strengths. But we strongly disagree with her deprecating view of greenhouse gas (GHG) emissions trading and her implicit endorsement of “conventional methods of stemming pollution”: command-and-control regulation. Emissions trading is our most powerful and effective regulatory instrument for dealing with global climate change (and indeed one of the strengths of the Kyoto regime now being deployed internationally). Bell’s arguments to the contrary are shortsighted and misplaced.

In suggesting that emissions trading will not work in many countries, Bell makes a cardinal error of misplaced comparison. The issue is not how global GHG emissions trading compares to national SO2 emissions trading in the United States; the issue is how GHG emissions trading compares to alternative regulatory approaches at the international level, including which approach is most likely to overcome the problems that Bell identifies. All regulatory tools, including command and control, taxes, and trading, require monitoring and enforcement. (Bell is wrong to imply that cap and trade somehow neglects or skirts monitoring and enforcement; it does not.) And any global GHG abatement strategy must confront the problems of weak legal systems, corruption, and inefficient markets that Bell highlights.

Given these inevitable challenges, cap and trade is far superior to Bell’s preferred command-and-control regulation in reducing costs, encouraging innovation, and engaging participation. Unlike command regulation, emissions trading can also help solve the implementation problems that Bell notes, by creating new and politically powerful constituencies—multinational firms and investors and their domestic business partners in developing and transition countries—with an economic stake in the integrity of the environmental regulatory/trading system. Under command regulation, compliance is a burden to the firm, but under market-based incentive policies compliance improves because costs are reduced, firms are rewarded for outperforming targets, and firms holding allowances have incentives to lobby for effective enforcement and to report cheating by others, in order to maintain the value of their allowances. In short, if governance is weak, the solution is to reorient incentives toward better governance.

Bell’s suggestion that financial and market actors in developing and transition countries are too unsophisticated to master the complexities of emissions trading markets is belied by their skill in energy markets and other financially rewarding international commodity markets. And her assertion that the information demands of trading systems are greater than those of command and control is contradicted by the findings of a recent Resources for the Future study. Monitoring costs are indeed higher under tax and trading systems, because those systems monitor actual emissions, which matter far more environmentally than the installation of control equipment, the typical focus of command regulation. But total information burdens are actually higher under command systems, which also require governments to amass technical engineering details better understood by firms. Monitoring the installation of control equipment may even be misleading, because equipment can be turned off, broken, or overtaken by increases in total operating activity level; thus, it is worth spending more to monitor actual pollution. (Many GHG emissions can be monitored at low cost by monitoring fuel inputs.) Finally, the social cost savings of GHG emissions trading systems would dwarf any additional information costs they might require.

Moreover, using command-and-control policies will entrench the very features of central planning that Bell criticizes and, worse, will offer a tool to the old guard to reassert bureaucratic state control in countries struggling to move toward market economies. By contrast, cap and trade will not only offer environmental protection at far less cost, thereby fostering the adoption of effective environmental policies in countries that cannot afford expensive command regulation, but emissions trading will also help inculcate market ideas and practices in precisely those countries that need such a transition to relieve decades or centuries of dictatorial central planning.

The weakness of Kyoto is not that it employs too much emissions trading in developing and transition countries, but rather too little.

JONATHAN B. WIENER

Duke University

Durham, North Carolina

University Fellow

Resources for the Future

Washington, D.C.

RICHARD B. STEWART

New York University

New York, New York

JAMES K. HAMMITT

Harvard University

Cambridge, Massachusetts

DANIEL J. DUDEK

Environmental Defense

New York, New York



Energy research

In “Reversing the Incredible Shrinking Energy R&D Budget” (Issues, Fall 2005), Daniel M. Kammen and Gregory F. Nemet rightly call attention to the fact that current energy R&D investments are significantly lower than those of the 1980s. However, having tracked energy R&D trends for more than a decade now, we conclude that the story of public-sector energy R&D is more nuanced and multifaceted than Kammen and Nemet’s piece indicates. In fact, public-sector energy R&D investments across the industrialized countries have largely stabilized since the mid-1990s in absolute terms. Yet as Western economies and national research portfolios have expanded over that period, energy R&D has lost ground in each industrialized country relative to its gross domestic product and overall R&D investment. These findings are based on analysis of energy R&D investment data gathered by the International Energy Agency (IEA) and are supported by Kammen and Nemet’s data as well.

During the energy crises of the 1970s, the aggregate energy R&D investments of IEA governments rose quickly from $5.3 billion in 1974 to their peak level of $13.6 billion in 1980. Investment then fell sharply before stabilizing in 1993 at about $8 billion in real terms. In each of the ensuing years, aggregate investment has deviated from that level by less than 10%. Below this surface tranquility, however, significant changes in the allocation of energy R&D resources are occurring. For example, some countries that once had large nuclear R&D programs, such as Germany and the United Kingdom, now perform virtually no nuclear energy R&D. Many fossil energy programs have also experienced dramatic funding reductions, although fuel cell and carbon dioxide capture and storage components of fossil programs have become more significant parts of the R&D portfolio. Conversely, conservation and renewable energy programs have fared well and have grown to constitute the largest energy R&D program elements in many countries.

The long-term energy R&D trend line from 1974 to 2004 suggests that the high investment levels of the late 1970s might be viewed as a departure from the historical norm. The rapid run-up in energy R&D levels in the late 1970s and early 1980s was driven by global energy crises and policymakers’ expectations regarding the energy future in the light of those events. Because new energy technologies were considered indispensable for resolving the energy crises, energy R&D investments rose accordingly. Since then, however, perceptions either of the gravity of the world’s energy problems or of the relative value of energy R&D as a vehicle for energy technology innovation (or both) have shifted.

Public- and private-sector investments in R&D, including energy R&D, are made with an expectation of societal and financial returns sufficient to warrant them. Thus, changing the climate for energy R&D will involve informing perceptions about the prospective benefits of energy R&D investment. Communicating the evolution of funding levels during the past several decades is not sufficient to fundamentally alter the predominant perceptions of the potential value of energy R&D. More broadly, the perceived benefits of energy R&D reflect society’s beliefs about the value of energy technology in addressing priority economic, security, and environmental challenges. Ultimately, it will be the evolution of these perceptions that alters the environment for public- and private-sector support for energy R&D investments and that enables society to craft effective, durable strategies to change the ways in which energy is produced and consumed.

PAUL RUNCI

LEON CLARKE

JAMES DOOLEY

Joint Global Change Research Institute

University of Maryland

College Park, Maryland


Remembering RANN

The article on RANN (Research Applied to National Needs) by Richard J. Green and Wil Lepkowski stimulated my interest and attention (“A Forgotten Model for Purposeful Science,” Issues, Winter 2006). As noted in the article, I became interested enough during my White House time (1970–1973) to set up an interagency committee to follow RANN’s progress. I know Al Eggers and knew Bill McElroy as a close friend and an activist for both science and technology, and I admire their courage in advocating RANN. However, the world has changed dramatically since those times. Even then, the idea of easing the flow of knowledge, creativity, and technology from its sources to its ultimate users, wherever they might be, was current.

The Bayh-Dole legislation was a landmark, ceding patent ownership to academic institutions. Other keys to success in innovation have emerged from managerial theory (read Peter Drucker, William O. Baker, and Norman Augustine, for examples). For them and others with broad experience, RANN and its philosophy were old hat. Nevertheless, RANN brought the idea of technology transfer and its societal benefits to the fore for skeptics, including academics, dedicated researchers, and venture investors, not to mention advocates for pure science.

This was an important emergence, and from it came today’s venture capital thrusts, engineering research centers (pioneered by Erich Bloch during his tenure as National Science Foundation director), science and technology centers (also Bloch), and a raft of new departments in the universities, each of which contributes several new interdisciplinary foci for study and for spawning new disciplines. Among these are information technology (especially software-based), biotechnology, nanotechnology (yet to be accepted by the traditionalists), and energy research.

Behind this explosion of education, research, entrepreneurism, and new industries are people who recognized early that economic development and research were closely tied together. This connection goes far beyond electronics, medicine, and aerospace, which were among the leaders in the transformation described above. The impact of this revolution is difficult for those involved in the process to appreciate. In China, India, and other up-and-coming countries, however, it is looked on as a remarkable, U.S.-inspired transformation of the research system.

The path from invention and research to the marketplace contains many obstacles. So success hinges on several steps not ordinarily advertised. One that is often cited is marketing, despite the authors’ disdain for market-based decisions. Too few engineers or scientists pay much attention to marketing. Another matter requiring attention is intellectual property and patents. Consortia are now cited as way stations on the road to commercialization.

The Green-Lepkowski article recognizes the importance of purpose in these matters. At Bell Laboratories, one of the keys to success was said to be a clear idea of purpose (in its case, “to improve electrical communication”). All new projects were held up to that standard. These and other ways of advancing innovation were not mentioned in the article, but I know that the authors understand the importance of such considerations. Fortunately, there are brigades of scientists, engineers, and medical practitioners who ply the many paths to innovation. These many explorers are the modern RANN adventurers.

ED DAVID

Bedminster, New Jersey

Ed David is a former presidential science advisor


“A Forgotten Model for Purposeful Science” by Richard J. Green and Wil Lepkowski looks with admiration and nostalgia to the RANN program supported by the National Science Foundation in the 1970s. I have a somewhat different recollection. At the time, I was an active scientist at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, where one of the RANN programs was being conducted, probably the largest in the atmospheric sciences. That program, titled the National Hail Research Experiment (NHRE, rhymes with Henry) was an attempt to suppress hail by cloud seeding. Weather modification, done by introducing freezing nuclei into rain clouds, was already controversial.

The NHRE program was, however, motivated by claims of success from Soviet cloud physicists, enhanced by our insecurity regarding Soviet successes with bombs, satellites, and missiles. The Soviet claims were not well documented but could not be totally discounted. Strong scientific and political pressure, mostly from outside, forced NCAR to take on this project, which became a major part, and almost the only growing part, of the NCAR program.

The basic hypothesis was that the freezing of raindrops into hailstones near the tops of strong thunderstorms was controlled by freezing nuclei, which vary greatly in the ambient atmosphere. When the nucleus population is small, as it commonly is, a small number of hailstone embryos, small ice particles, are generated, but they may grow to large and damaging size by collecting supercooled raindrops as they fall. The introduction of additional nuclei will increase the number of hail embryos but decrease their ultimate size by increasing competition for water drops. The Soviets claimed that they could identify potentially damaging hailstorms, introduce freezing nuclei in antiaircraft shells, and thereby largely remove the risk. A U.S. attempt to replicate the claimed Soviet success would be limited by the inability to fire artillery into a sky full of general and commercial aviation. Instead, it had to be done by more expensive aircraft-based seeding, which was to occur over a region of high hailstorm frequency but low population and potential economic impact.

After a few years of intensive effort, it was determined that the results were, at best, uncertain, made almost indeterminate by the very large natural variance of hailstorm intensity and by deficiencies in the assumed hailstorm mechanism. Later, it became evident that the claimed Soviet successes were, at the most charitable, overoptimistic. After one NHRE director died and the second suffered the only partial failure in his brilliant career, the program was shut down. Unintentionally, it was more of a success as basic science than as applied science, since a great deal was discovered about hailstorm structure and behavior and methods of aircraft and radar observation.

Aside from the fairly high cost, the major deficit of NHRE was further denigration of the scientific view of weather modification, which already suffered from overly optimistic or dishonest claims. As pointed out in a recent National Academy of Sciences report, scientific research on weather modification in this country and most of the world is even now almost negligible, although the technological capabilities are much greater and many operational programs, including hail suppression, are in progress. Would this history be different if hail suppression had been considered an uncertain scientific hypothesis, which could first be examined and tested on a small scale and later expanded as circumstances allowed? Who knows?

Not having been in the senior management of NCAR or closely associated with NHRE, I cannot say how much, if any, of their problems were due to the RANN structure. Many of the scientific staff shared the intolerant view toward forced application stated by Green and Lepkowski. Since those days, NCAR has developed a large division dedicated to applied science, which seems successful and sometimes appears to bail out underfunded basic science programs, though not usually with National Science Foundation money. From this view, the need for a revived RANN is not obvious.

DOUGLAS LILLY

Professor Emeritus

University of Oklahoma

Norman, Oklahoma


Transformational technologies

In “Will Government Programs Spur the Next Breakthrough?” (Issues, Winter 2006), Vernon W. Ruttan challenges readers to identify technologies that will transform the economy and wonders whether the U.S. government will earn credit for them. My list includes:

Superjumbo aircraft. The Airbus A380 transports 1.6 times the passenger kilometers of a Boeing 747 and will thus flourish. U.S. airframe companies, not the U.S. government, missed the opportunity to shuffle the billions between megacities.

Hypersonic aircraft. After a few decades, true spaceplanes will top jumbos in the airfleet and allow business travelers to commute daily in an hour from one side of Earth to the other. Japan and Australia cooperated on successful tests in July 2005. NASA remains in the game.

Magnetically levitated trains. For three years, China has operated a maglev that travels from downtown Shanghai to its airport in eight minutes, attaining a speed of 400 kilometers per hour. In 2005, a Japanese maglev attained 500 kilometers per hour. Maglev metros, preferably in low-pressure underground tubes, will revolutionize transport, creating continental-scale metro systems with jet speeds and minimal energy demand. The U.S. government is missing the train, though supporting some relevant work on superconducting cables.

Energy pipes for transporting hydrogen and electricity. Speaking of superconductivity, cool pipes storing and carrying hydrogen wrapped in superconducting cables could become the backbone of an energy distribution supergrid. The U.S. Department of Energy (DOE) listens but invests little. The DOE does work on the high-temperature reactors that could produce both the electricity and hydrogen and make Idaho the new Kuwait, and cheaper to defend (FILES/BigGreen.pdf).

Large zero-emission power plants (ZEPPs) operating on methane. To achieve the efficiency gains Ruttan seeks and sequester the carbon dioxide about which he worries, the United States should build power plants at five times the scale of the present largest plants, operating at very high temperatures and pressures. The California company Clean Energy Systems has the right idea, and we should be helping them to scale up their 5-megawatt Kimberlina plant 1,000-fold to 5 gigawatts. Congress forces public money into dirty laundering of coal (www.cleanenergysystems.com).

Search technologies. We already delight in information search and discovery technologies invented in the 1980s by Brewster Kahle and others with a mix of private and public money. We underestimate the revolutionary market-making and efficiency gains of these technologies that make eBay and all kinds of online retail possible. Disintermediation may be the Ruttan revolution of the current Kondratiev cycle, as Kahle’s vision of universal access to all recorded knowledge rapidly becomes reality (www.archive.org).

Action at a distance. Radio-frequency identification (RFID) tags, remote controls (magic wands), voice-activated devices, and machine translation, all heavily subsidized by U.S. military money, will make us credit the U.S. government for having murmured “Open, sesame.”

JESSE AUSUBEL

Program Director

Alfred P. Sloan Foundation

New York, New York



Americans believe in public support of scientific research but not in public support of technology. The roots of today’s ambivalence toward and opposition to technology policy extend back to the early years of the republic, when the Federalists and anti-Federalists disagreed over fiscal policy and trade, desirable pathways of economic development, and the role of the state in charting and traversing them. Those debates were tangled and confusing; here the point is simply that the question of whether to provide direct support for technology development has a lengthy history and the verdict has generally been negative. Late in the 18th century, Alexander Hamilton and the Federalists lost out. During the 19th century, giveaways to railway magnates and the corruption that accompanied them sparked a reaction that sealed the fate of “industrial policy.” World War II and especially the Korean War transformed the views of the U.S. military toward technology but had little impact otherwise. Meanwhile, the National Institutes of Health has expanded research funding to the near-exclusion of support for applications, which have been allowed to trickle out with impacts on health care that seem vanishingly small in aggregate (as suggested by examples as different as the lack of improvement in health outcomes despite ever-growing expenditures and the recent stumbles of major pharmaceutical firms). Hands-off attitudes toward technology utilization continue to predominate in Washington.

Agriculture is the exception, coupling research with state and federal support for diffusion through a broad-based system of county-level extension. This policy too can be traced to the nation’s beginnings. As president, George Washington himself urged Congress to establish a national university charged with identifying best practices in the arts of cultivation and animal husbandry and fostering their adoption by farmers. Political construction could not advance until the Civil War, after which state governments took the first steps, attaching agricultural experiment stations to land-grant colleges and, from late in the century, sending out extension agents to work directly with farmers. In 1914, the federal-state “cooperative” extension system began to emerge. The objective was to raise the income levels of farm families at a time of widespread rural poverty. Small farmers were seen as backward and resistant to innovation and extension agents as something like schoolteachers. Research and extension were responses to a social problem, not the perception of a technological problem. Although these policies deserve much of the credit for the increases in crop and livestock yields that made Kansas wheat and Texas beef iconic images of American prosperity, the model found no imitators after World War II. Widespread long-term increases in productivity could generate nothing like the awe of innovations associated with war: not only the atomic bomb, but radar and jet propulsion.

What does this brief history suggest concerning the question posed in the title of Vernon W. Ruttan’s new book, Is War Necessary for Economic Growth? That if war is not strictly necessary, some sort of massive social problem is, such as the plight of farm families a century ago. Of the two candidates he discusses, health care seems to me the more likely to spur a new episode of technological transformation. All Americans have frequent and direct experience with health services. Cost escalation has attracted concern since the 1970s, and the health care sector of the economy now exceeds manufacturing as a proportion of national product. Recognition is spreading that output quality has been stagnant for at least a generation, a phenomenon unheard of in other major economic sectors. “Technology” includes the organization and delivery of intangible services such as health care. That is where the problems lie. They cannot be solved by miracle cures stemming from research. It may take another generation, but at some point the social organization of health care will be transformed. We should expect the impacts to be enormous and to spread throughout the economy.

JOHN A. ALIC

Washington, D.C.


River restoration

In reading Margaret A. Palmer and J. David Allan on the challenges of evaluating the efficacy of river restoration projects (“Restoring Rivers,” Issues, Winter 2006), it is useful to recall the early challenges of regulating point-source dischargers: the traditional big pipes in the water.

In the Clean Water Act (CWA), Congress insisted that the first priority, relative to point-source regulation, be the development and imposition of technology-based effluent guidelines. These were imposed in enforceable permits, at the end of the pipe, regardless of the quality of the receiving waters.

Monitoring was primarily for compliance purposes; again, at the end of the pipe. There was very little ambient water quality monitoring, a problem persisting today. Only after these requirements were in place were water quality standards, made up of designated uses and supporting criteria, to be considered as an add-on, so to speak.

These point-source effluent guidelines resulted in measurable reductions in pollutant loadings, in contrast to the uncertainty associated with river restoration projects or best management practices generally.

Thirty-three years after passage of the CWA, we are still trying to patch together a nationwide ambient water quality monitoring program and develop adequate water quality standards and criteria for nutrients especially, encompassing the entire watershed, to guide work on nonpoint- (diffuse) as well as point-source pollution. Again, both of these priorities were slighted by the understandable focus on the technology-based effluent guidelines imposed on the point sources over three decades.

Palmer and Allan are correct in pointing out the need for monitoring and evaluating the adequacy of river restoration projects. It is imperative that we get a handle on the success, or lack thereof, of river restoration practices implemented here and now. At the very least, we need to know if the money invested is returning ecological value, regardless of the cumulative impact on water quality standards for an entire watershed at this point in time. We must pursue both kinds of information concurrently.

The Environmental Protection Agency and the U.S. Geological Survey are having a hard time garnering resources for the broad water quality monitoring effort. So Palmer and Allan are correct in urging Congress, with the support of the Office of Management and Budget, to require all implementing agencies to demonstrate the effectiveness of their ongoing restoration programs pursuant to requisite criteria. This will, in the short run, cut into resources available for actual restoration work. But in the long run it will benefit these programs by enhancing their credibility with policymakers, budget managers, and appropriators.

As for the authors’ more ambitious recommendations (such as coordinated tracking systems, a national study, and more funding overall), it must be recognized that any new initiatives in the environmental area are, for the foreseeable future, going to be funded from existing programs rather than from new money. Their proposals need to be addressed in a strategic context that is mindful of competing needs such as the development of water quality standards and an ambient water quality monitoring program nationwide.

G. TRACY MEHAN III

The Cadmus Group

Arlington, Virginia

G. Tracy Mehan III is former assistant administrator for water at the Environmental Protection Agency.


River systems are more than waterways—they are vital national assets. Intact rivers warrant protection because they cannot be replaced, only restored, often at great expense. Margaret A. Palmer and J. David Allan rightly state that river restoration is “a necessity, not a luxury.” We commend their clarity and candor; our policymakers require it, and the nation’s rivers deserve and can afford no less.

Palmer and Allan spotlight inadequate responses to river degradation. The goal of simultaneously restoring rivers while accommodating economic and population growth is still elusive. Narrow perspectives, fragmented policies, and inadequate monitoring hamper the evolution of improved techniques and darken the future of the nation’s rivers. We support their call for a concerted push to advance the science and implementation of river restoration, and we propose going even further.

How will success be achieved? In our view, we must replace restoration projects with restoration programs that are more inclusive than ever before. A programmatic approach involves not only citizens, institutions, scientists, and policymakers but also data management specialists and educators. The latter are needed to facilitate efficient information transfer to other participants and the citizenry, who provide the bulk of restoration funds. A programmatic approach allows funding from multiple sources to be pooled to coordinate an integrated response. Precious funds can then be allocated strategically, rather than opportunistically, to where they are most needed.

We believe that this will bring about a needed shift in the way monitoring is conducted. Monitoring should evaluate the cumulative benefits of a restoration program, rather than the impact of an isolated project. It is well known that non–point-source pollution is detrimental to rivers, but implicating any one source with statistical significance is problematic. Similarly, a restoration program may result in measurable improvement of the river’s ecological condition that is undetectable at the scale of an individual project.

We firmly support the notion that river restoration programs should rely on a vision of change as a guiding principle. Rivers evolve through both natural and anthropogenic processes, and we suspect that their patterns of change may be as important as any other measure of ecological condition. Additionally, many rivers retain the capacity for self-repair, particularly in the western United States. In these systems, the best strategy may be to alleviate key stressors and let the river, rather than earth movers, do the work. (We acknowledge this approach may take too long if species extinctions are imminent and many rivers have been altered too greatly for self-repair.)

Restoring the nation’s rivers over the long term requires both civic responsibility and strong leadership, and perhaps even fostering a new land ethic: a culture of responsibility that chooses to restore rivers from the bottom up through innumerable daily actions and choices. Clearly, strong leadership (and reliable funding) are also needed in the near term to fuel this change from the top down. Successful approaches will vary among locales. One strategy might be to launch adaptive-management-based restoration programs at the subregional scale in states at the leading edge of river restoration (such as Michigan, Washington, Maryland, and Florida). Successful demonstration programs could serve as models for the rest of the nation.

In rising to the challenge of restoring the rivers of this nation, we must not forget rivers beyond our borders. Meeting growing consumer demands in a global economy places a heavy burden on rivers in developing countries. Growing economies need water, meaning less water for the environment. As we rebuild our national river portfolio, we must reduce, not translocate, our impacts. Failure to do so will exacerbate global water scarcity, which is one of the great environmental challenges and threats to geopolitical stability in this new century. We are optimistic that concerted efforts, such as this article by Palmer and Allan, will help to foster a global culture that chooses to sustain healthy rivers not only for their intrinsic value, but also for the wide array of goods and services they provide.

ROBERT J. NAIMAN

JOSHUA J. LATTERELL

School of Aquatic and Fishery Sciences

University of Washington

Seattle, Washington


Yes, in my backyard

Richard Munson’s “Yes, in My Backyard: Distributed Electric Power” (Issues, Winter 2006) provides an excellent discussion of the issues and opportunities facing the nation’s electricity enterprise. In 2003, the U.S. electricity system was judged by the National Academy of Engineering to be the greatest engineering achievement of the 20th century.

Given this historic level of achievement, what has happened to so profoundly discourage further innovation and reduce the reliability and performance of the nation’s electricity system? By 1970, diminishing economy-of-scale returns, combined with decelerating growth in demand, rising fuel costs, and more rigorous environmental requirements, began to overwhelm the electric utility industry’s traditional declining-cost energy commodity business model. Unfortunately, the past 35 years, culminating in today’s patchwork of so-called competitive restructuring regulations, have been an extended period of financial “liposuction” counterproductively focused on restoring the industry’s declining-cost past at the expense of further infrastructure investment and innovation. As a result, the regulated electric utility industry has largely lost touch with its ultimate customers and the needs and business opportunities that they represent in today’s growing, knowledge-based, digital economy and society.

The most important asset in resolving this growing electricity cost/value dilemma, and its negative productivity and quality-of-life implications, is technology-based innovation that disrupts the status quo. These opportunities begin at the consumer interface, and include:

  • Enabling the seamless convergence of electricity and telecommunications services.
  • Transforming the electricity meter into a real-time service portal that empowers consumers and their intelligent end-use devices.
  • Using power electronics to fundamentally increase the controllability, functionality, reliability, and capacity of the electricity supply system.
  • Incorporating very high power quality microgrids within the electricity supply system that use distributed generation, combined heat and power, and renewable energy as critical assets.

The result would, for the first time, engage consumers directly in ensuring the continued commercial service success of the electricity enterprise. This smart modernization of the nation’s electricity system would address its combined vulnerabilities in reliability, security, power quality, and consumer value, while simultaneously raising energy efficiency and environmental performance and reducing cost.

Such a profound transformation is rarely led from within established institutions and industries. Fortunately, there are powerful new entrants committed to transforming the value of electricity through demand-guided, self-organizing entrepreneurial initiatives unconstrained by the policies and culture of the incumbency. One such effort is the Galvin Electricity Initiative, inspired and sponsored by Robert Galvin, the former president and CEO of Motorola. This initiative seeks to literally reinvent the electricity supply and service enterprise with technology-based innovations that create the path to the “Perfect Power System” for the 21st century: a system that provides precisely the quantity and quality of electric energy services expected by each consumer at all times; a system that cannot fail.

This “heretical” concept focuses on the consumer interface discussed earlier and will be ready for diverse commercial implementation by the end of 2006. For more information, readers are encouraged to visit the Web site www.galvinelectricity.org.

KURT YEAGER

President Emeritus

Electric Power Research Institute

Palo Alto, California


Tax solutions

Craig Hanson’s Perspectives piece, “A Green Approach to Tax Reform” (Issues, Winter 2006), is an excellent introduction to the topic of environmental taxes as a revenue source for the federal budget. The case for using environmental charges, however, need not be linked to the current debate over federal tax reform but is compelling in its own right. The United States lags most developed countries in its use of environmental taxes. According to the most recent data from the Organization for Economic Cooperation and Development (OECD), for 2003 the share of environmental taxes in total tax collections in the United States was 3.5%: the lowest percentage among OECD countries for which data were available and well below that of countries such as France (4.9%), Germany (7.4%), and the United Kingdom (7.6%). Had our environmental tax collections equaled the 2003 OECD average (5.7%), we would have collected over $60 billion more in environmental taxes, roughly half of what we collected from the corporate income tax in that year.
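
A back-of-the-envelope check of this comparison is sketched below. The total-collections figure of roughly $2.8 trillion is an assumed round number used only for illustration (it is not given in the letter); the 3.5% and 5.7% shares are those cited above, and with the assumed base the result lands in the neighborhood of the “over $60 billion” figure.

    # Illustrative check of the OECD environmental-tax comparison above.
    # ASSUMPTION: total U.S. tax collections (all levels of government) of
    # roughly $2.8 trillion in 2003, a round figure chosen for illustration,
    # not a number given in the letter.
    total_tax_collections = 2.8e12      # dollars, assumed
    us_share = 0.035                    # U.S. environmental-tax share (3.5%)
    oecd_avg_share = 0.057              # OECD average share (5.7%)

    additional_revenue = (oecd_avg_share - us_share) * total_tax_collections
    print(f"Implied additional environmental-tax revenue: ${additional_revenue / 1e9:.0f} billion")
    # With the assumed base, this comes out to roughly $62 billion,
    # consistent with the "over $60 billion" figure cited in the letter.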

Moreover, there is good evidence that existing levels of environmental taxation in the United States fall well short of their optimal levels. For example, although the average level of taxation of gasoline in the United States is roughly $.40 per gallon, current research suggests that the optimal gasoline tax rate (taking into account pollution and congestion effects) exceeds $1.00 per gallon.

An additional point not brought out in Hanson’s article relates to the relative risk of tax versus permit programs (as used in the Clean Air Act’s SO2 trading program among electric utilities). Pollution taxes provide a measure of certainty to regulated firms. A carbon tax of $20 per metric ton of carbon, for example, assures firms subject to the tax that they will pay no more than $20 per ton to emit carbon. A cap-and-trade program has no such assurance. The price paid for emissions under a cap-and-trade program depends on the market price of permits, which could fluctuate depending on economic conditions. Permit prices for SO2 emissions, for example, ranged from roughly $130 to around $220 per metric ton in 2003.

Finally, if the United States were to use a carbon tax to finance corporate tax reform as Hanson suggests, it is worth noting that the revenue required of a carbon tax to offset revenue losses from tax integration is relatively modest and would certainly fall short of levels required to bring about significant reductions in carbon emissions. This proposal could be viewed as a first step toward a serious carbon policy whereby the United States gains experience with this new tax before committing to more substantial levels of carbon reduction.

GILBERT E. METCALF

Professor of Economics

Tufts University

Medford, Massachusetts


Craig Hanson makes a highly compelling case for green tax reform: taxing environmentally harmful activities and using the revenues to reduce other taxes such as those on personal income. As Hanson points out, such tax reforms can improve the environment and stimulate R&D on cleaner production methods, while allowing firms the flexibility to reduce emissions at lowest cost.

Taxes also have several advantages over systems of tradable emissions permits. First, the recycling of green tax revenues in income tax reductions can stimulate additional work effort and investment; this effect is absent under a system of (non-auctioned) permits. Second, appropriately designed revenue recycling can also help to offset burdens on low-income households from higher prices for energy-intensive and other environmentally harmful products. In contrast, emissions permits create rents for firms that ultimately accrue to stockholders in capital gains and dividends; however, stockholders tend to be relatively wealthy, adding to equity concerns. Third, permit prices tend to be very volatile (for example, due to variability in fuel prices), making it difficult for firms to undertake prudent investment decisions; this problem is avoided under a tax, which fixes the price of emissions. And finally, in the context of international climate change agreements, it would be easier for a large group of countries to agree on one tax rate for carbon emissions than to assign emissions quotas for each individual nation, particularly given large disparities in gross domestic product growth rates and trends in energy efficiency.

Charges for activities with socially undesirable side effects are beginning to emerge in other contexts, which should help to increase their acceptability in the environmental arena. For example, the development of electronic metering technology and the failure of road building to prevent increasing urban gridlock have led to interest in road pricing schemes at a local level in the United States. In fact, the UK government, after the success of cordon pricing in reducing congestion in central London, has proposed scrapping its fuel taxes entirely and replacing them with a nationwide system of per-mile charges for passenger vehicles, with charges varying dramatically across urban and rural areas and time of day. There is also discussion about reforming auto insurance, so that drivers would be charged by the mile (taking account of their characteristics) rather than on a lump-sum basis.

My only concern with Hanson’s otherwise excellent article is that it may leave the impression that green taxes produce a “double dividend” by both improving the environment and reducing the adverse incentives of the tax system for work effort and investment. This issue has been studied intensively and, although there are some important exceptions, the general thrust of this research is that there is no double dividend. By raising firm production costs, pollution taxes (and other regulations) have an adverse effect on the overall level of economic activity that offsets the gains from recycling revenues in labor and capital tax reductions. In short, green taxes still need to be justified by their benefits in terms of improving the environment and promoting the development of clean technologies.

IAN PARRY

Resources for the Future

Washington, DC

A Dam Shame

Jacques Leslie is a journalist, and Deep Water has a journalist’s style. It reads well and tells a compelling story. The book relates first-person accounts of three protagonists, each of whom is preoccupied with large dam projects. The stories are rich in detail and intended to teach lessons about large projects that are not to be found in the dry reports of development consultants and international organizations.

In the mid-1990s, the World Bank, at the urging of nongovernmental organizations and with the cooperation of donor organizations, supported the development of an independent commission to study the effects of large dams. The World Commission on Dams (WCD) was made up of 10 members with a wide variety of views. The commission’s 400-page report, Dams and Development, appeared in 2000 to great publicity (Nelson Mandela and other dignitaries made speeches at the press conference announcing its release), and the report remains available at the commission’s Web site (dams.org).

But as Leslie notes, with seeming regret, the report has had little lasting impact on either the World Bank or on the appetite for new dam projects. Many observers think that is all to the good. The dam-building community and many in the development community echo the World Bank’s senior water advisor John Briscoe’s view that the commission’s report was hijacked by dam opponents and does not provide a reasonable path for the future.

Today, 10 years after the hiatus on new dam building that occurred in the mid-1990s, the dam industry is in full gear, especially in South America and Asia, and is also riding a wave of privately financed small and mid-sized projects as a result of rising energy prices. The impact of Dams and Development has turned out to be less than its proponents presumably hoped for. Dam projects remain controversial, but they are moving ahead at a fast rate.

Leslie’s stories show why dam disputes are so contentious, but they offer little help in resolving disputes or setting a workable policy for guiding dam planning. He provides a vivid close-up look at disputes but fails to provide an overview that could help us gain some perspective.

Deep Water follows three of the former WCD commissioners through their daily dealings with dams and other water projects. Leslie chooses one anti-dam activist, one middle-of-the-road scientist, and one pro-dam water manager. Deep Water is not a book written by someone sitting around a university campus or library, but by someone who spent long stretches of time in the field, on the front lines of the current water wars. Leslie, a former war correspondent, knows his job and does it well.

Medha Patkar is an anti-dam activist in the Narmada River basin of western India, in the states of Gujarat and Madhya Pradesh. Hers is a story that would try the conscience of the most avowed dam proponent. Using Gandhian tactics, she has fought a slowly losing battle with the Indian and state governments over the building of a cascade of dams on the Narmada that was funded in part by the World Bank. She has undertaken fasts, put herself in near-death situations to stop rising water levels, and led dam-site occupations and other nonviolent protests. Although the story of development on the Narmada has many facets, at its core is a story of compassionless dislocations of the local population, incredible mismanagement, and a lack of support for those most affected by the projects. The national and state governments have even ignored resettlement and environmental policies that they were obligated by treaty or law to uphold. As with many large projects, the net economic benefits of the Narmada developments may (or may not) be positive, but the distribution of those benefits is aimed squarely at entrenched economic interests and away from the people displaced, who not surprisingly are poor and relatively powerless. Dam building on the Narmada was in part responsible for the WCD.

Thayer Scudder is a California Institute of Technology anthropologist and expert on resettlement. In contrast to Medha, who is passionately focused on one river basin, Scudder is a scientist who has worked on hundreds of dam projects. Leslie follows Scudder on projects that are mostly in southern Africa (Lesotho, Zimbabwe, and Zambia). Although Scudder is intended to represent a middle-of-the-road perspective, the stories in this second part are anything but. The African projects around which the story is told are disasters. The Kariba project on the Zambezi is a resettlement and ecological nightmare, aided by a good deal of political corruption. Despite this, Scudder is portrayed as retaining his optimism about the potential benefits that can result from well-managed dam projects. Still, by this time in the book, the reader can be forgiven for wondering if there ever is such a thing. We have yet to meet a good dam in Deep Water, though Scudder holds out hope.

Don Blackmore is an Australian water resources manager. His is the story of a “healthy, working river.” To be clear, the Murray River projects in Australia have had adverse effects on the environment and on indigenous peoples. But because fewer people are affected here than in the other stories, the tales of botched resettlement seem less disturbing. In addition, these projects have benefited from good governance and from the involvement of stakeholders who were richer and more sophisticated than the people fighting the Indian and African projects. The success of water management in New South Wales, Leslie argues, has as much to do with good government as with good dams.

So what does one take away from this lively book? The cover blurbs on Deep Water are misleading. In the end, the book has little to do with dams and much to do with public administration and the mismanagement of large infrastructure projects. In reading that dam projects experience delays and cost overruns, usually fail to produce the promised economic benefits, and cause irreversible environmental harm, I was reminded of Flyvbjerg et al.’s recent book on transportation projects (Megaprojects and Risk, Cambridge, 2003). The fundamental issues are the same.

At the heart of all big infrastructure projects is “us.” There are simply too many of us. The debate at the heart of Deep Water is not about dams; it is about the increasing human population and how to sustain it. Human populations need many things to survive: potable water, irrigation, power, protection from natural hazards, and many other things that dams provide. Other ways of providing services such as energy (for example, the use of wood for fuel) also have adverse social and environmental outcomes. The dams debate is still waiting for the impartial analysis promised by the WCD. Deep Water provides fuel for that debate, but little light.

From the Hill – Spring 2006

Constraints continue in proposed R&D budget

President Bush’s proposed budget for fiscal year (FY) 2007, released on February 6, calls for substantial increases in key physical sciences and engineering programs as well as big boosts for alternative energy R&D and the development of new space vehicles. But these increases come at the expense of funding cuts in other parts of the federal research portfolio.

Spending for priority programs would enable nondefense R&D to rise by 1.7%, compared to the president’s proposed 0.5% cut in all nondefense discretionary domestic programs. Although the overall federal investment in R&D would rise by 1.9%, or $2.6 billion, to $137 billion, the increase would be less than that needed to keep pace with expected inflation. This would mean the first decline in real terms in the total federal R&D portfolio since 1996. In addition, basic and applied research alone (excluding development and facilities) would decline by 3.4% to $54.7 billion. It would be the third year in a row that research funding has declined, after peaking in 2004.
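
As a rough illustration of the real-terms arithmetic, the sketch below applies the 2.2% inflation projection noted beneath the budget table to the total R&D figures reported there; it is a simple deflation of nominal growth, not an official AAAS calculation.

    # Rough real-terms check of the FY 2007 R&D totals cited above,
    # using the 2.2% projected inflation noted beneath the budget table.
    fy2006_rd = 134.351e9   # total R&D, FY 2006 estimate (dollars)
    fy2007_rd = 136.953e9   # total R&D, FY 2007 proposed (dollars)
    inflation = 0.022       # projected FY 2006-FY 2007 inflation rate

    nominal_growth = fy2007_rd / fy2006_rd - 1
    real_growth = (1 + nominal_growth) / (1 + inflation) - 1
    print(f"Nominal growth: {nominal_growth:+.1%}")   # about +1.9%
    print(f"Real growth:    {real_growth:+.1%}")      # about -0.3%, i.e. a decline in real terms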

Development is the clear winner in the proposed budget; a $4.2 billion increase for weapons development in the Department of Defense (DOD) budget and an $851 million increase for space vehicles in the National Aeronautics and Space Administration (NASA) budget exceed the $2.6 billion increase in overall R&D spending, leaving all other R&D programs collectively with less money to spend.

As part of a new “American Competitiveness Initiative” (ACI) designed to address a growing wave of concern about the state of U.S. innovation, three agencies—the National Science Foundation (NSF), the Department of Energy (DOE) Office of Science, and the National Institute of Standards and Technology (NIST) laboratories in the Department of Commerce—would receive substantial budget increases after years of flat or declining funding. DOE would also benefit from the president’s “American Energy Initiative,” with large increases in its R&D portfolio for alternative energy technologies.

The National Institutes of Health (NIH) budget, after declining slightly in FY 2006 for the first time in 36 years, would be flat at $28.6 billion. All but two NIH institutes and centers would see budget cuts for the second year in a row.

R&D portfolios in other agencies face steep cuts: the Environmental Protection Agency, 7.2%; the Department of Commerce’s National Oceanic and Atmospheric Administration, 6.3%; the Department of Agriculture, 16.5%; and the U.S. Geological Survey, 4.3%. Even the Department of Homeland Security (DHS) would see its R&D activities decline by 5.6% to $1.3 billion, though the overall DHS budget would increase.

In addition, agencies with increased budgets in some areas would take hits in others. Spending for NASA’s aeronautics programs would fall 18%, and what remains of its life sciences program would be cut 56% after a 30% cut in FY 2006. Although R&D at NIST laboratories would increase, the budget once again proposes to eliminate NIST’s Advanced Technology Program and to cut the budget of its Manufacturing Extension Partnership in half.

R&D in the FY 2007 Budget by Agency (budget authority in millions of dollars)

(Columns: FY 2005 Actual | FY 2006 Estimate | FY 2007 Budget | Change FY 06-07, Amount | Change FY 06-07, Percent)
Total R&D (Conduct and Facilities)
Defense (military) 70,269 72,485 74,076 1,591 2.2%
S&T (6.1-6.3 + medical) 13,564 13,778 11,214 -2,565 -18.6%
All Other DOD R&D 56,705 58,706 62,862 4,155 7.1%
Health and Human Services 29,125 29,074 29,020 -54 -0.2%
Nat’l Institutes of Health 27,838 27,768 27,768 0 0.0%
All Other HHS R&D 1,287 1,306 1,252 -54 -4.1%
NASA 10,197 11,394 12,245 851 7.5%
Energy 8,586 8,551 9,141 590 6.9%
Atomic Energy Defense R&D 4,134 3,983 4,006 23 0.6%
Office of Science 3,345 3,309 3,774 465 14.1%
Energy R&D 1,107 1,259 1,361 102 8.1%
Nat’l Science Foundation 4,102 4,175 4,523 348 8.3%
Agriculture 2,410 2,411 2,012 -399 -16.5%
Commerce 1,123 1,075 1,065 -10 -0.9%
NOAA 646 617 578 -39 -6.3%
NIST 446 424 451 27 6.4%
Interior 622 637 600 -37 -5.8%
U.S. Geological Survey 547 561 537 -24 -4.3%
Transportation 549 704 557 -147 -20.9%
Environ. Protection Agency 640 600 557 -43 -7.2%
Veterans Affairs 742 765 765 0 0.0%
Education 308 302 299 -3 -1.0%
Homeland Security 1,062 1,406 1,327 -79 -5.6%
All Other 729 773 767 -6 -0.8%
Total R&D 130,465 134,351 136,953 2,602 1.9%
Defense R&D 74,766 76,821 78,419 1,598 2.1%
Nondefense R&D 55,699 57,530 58,534 1,004 1.7%
Nondefense R&D excluding NASA 45,502 46,136 46,289 153 0.3%
Basic Research 27,598 27,855 28,197 343 1.2%
Applied Research 28,336 28,794 26,542 -2,251 -7.8%
Total Research 55,935 56,648 54,740 -1,908 -3.4%
Development 69,757 73,444 78,032 4,588 6.2%
R&D Facilities and Equipment 4,773 4,259 4,181 -78 -1.8%

Source: AAAS, based on OMB data for R&D for FY 2007, agency budget justifications, and information from agency budget offices. Note: The projected inflation rate between FY 2006 and FY 2007 is 2.2 percent.

President, Congress unveil innovation initiatives

With his proposed ACI, President Bush has added to an intensifying debate on innovation policy now taking place on Capitol Hill. A variety of bills have been introduced, the most prominent of which is a package of three bills offered by a bipartisan group of four senators. The bills are based on the recommendations in the National Academies’ Rising Above the Gathering Storm report.

Though the myriad innovation initiatives vary in how they would advance U.S. competitiveness, they share the common themes of increasing investment in research and strengthening education. Because of the growing concern about U.S. dependence on foreign oil and the emphasis on energy security in the president’s State of the Union speech, the topic of new energy resources has become closely linked to the competitiveness arena.

As detailed in a white paper released by the Office of Science and Technology Policy, ACI advocates doubling research funding at NSF, NIST, and DOE’s Office of Science over a period of 10 years and increasing basic and applied research funding in DOD. ACI would make the R&D tax credit permanent and create Career Advancement Accounts of up to $3,000 for workers who are beginning, changing, or advancing in a career.
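
For context, doubling a budget over 10 years implies compound growth of roughly 7% per year; the short calculation below makes that implication explicit (the per-year figure is derived arithmetic, not a number from the ACI white paper).

    # Implied annual growth rate for doubling a budget over 10 years.
    years = 10
    target_multiple = 2.0
    annual_growth = target_multiple ** (1 / years) - 1
    print(f"Implied annual increase: {annual_growth:.1%}")   # about 7.2% per year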

Although the emphasis on basic research was welcome news to the research community, it was the president’s science and mathematics education initiative that garnered the initial interest on the Hill. A few days after the release of the administration’s budget request, senators questioned Secretary of Education Margaret Spellings about the academic components of ACI at a February 9 hearing held by the Health, Education, Labor and Pensions Committee.

According to Spellings, the White House plan includes training 70,000 existing teachers over the next five years to teach Advanced Placement/International Baccalaureate (AP/IB) classes in math, science, and foreign languages. An Adjunct Teacher Corps also would be created to train 30,000 science, technology, engineering, and math (STEM) professionals over 10 years to serve as part-time science and math teachers. In addition, the department plans to establish a National Math Panel, modeled on the National Reading Panel, to identify best teaching practices, and a Math Now program to provide primary school teachers with specialized mathematics training. The program would also offer middle-school teachers remedial math education tools that can be targeted at struggling students.

When asked why ACI places more emphasis on math than on science education, Spellings said that the Department of Education’s philosophy is “math now, science next.” She said that conducting research on how children learn math is currently the single most important step toward improving math and science education. However, the Bush proposal recommends that science testing be included in the evaluation for the “adequate yearly progress” that schools must meet under the No Child Left Behind (NCLB) Act when it is reauthorized next year. Spellings added that emphasis on science would increase in the future.

Five days before the president unveiled ACI, the Protecting America’s Competitive Edge (PACE) Acts, based on the National Academies report Rising Above the Gathering Storm, were introduced by Sens. Pete Domenici (R-NM), Jeff Bingaman (D-NM), Lamar Alexander (R-TN), and Barbara Mikulski (D-MD). PACE is a package of three bills, each aligned with the jurisdiction of a committee with oversight of a specific theme: energy, education, and finance. Each bill already has more than 55 cosponsors.

The bills (S. 2197, S. 2198, and S. 2199) reflect many of the same initiatives outlined in the president’s proposal. For example, PACE would nearly double the NSF and DOE’s Office of Science R&D budgets, increase defense basic research funding, and train teachers to teach AP/IB science and math classes. In addition, the legislation provides scholarships to students majoring in STEM fields or who are preparing to teach these subjects at the K-12 level; awards grants to encourage university STEM departments to partner with departments of education to train future teachers; establishes summer programs and part-time master’s degree programs for existing teachers; and creates a clearinghouse for effective curriculum materials. The bills double and make permanent the current R&D tax credit and create new visas for students and workers in STEM fields.

The PACE-Energy bill also proposes creating an Advanced Research Projects Agency-Energy (ARPA-E), modeled on the Defense Advanced Research Projects Agency, to support groundbreaking energy research. Sen. Hillary Clinton (D-NY) introduced a standalone bill, the Advanced Research Projects Energy Act (S. 2196), at the end of January.

The PACE bills complement the National Innovation Act (S. 2109), introduced in December 2005 by Sens. John Ensign (R-NV) and Joseph Lieberman (D-CT) to implement a series of recommendations included in the National Innovation Initiative report published by the Council on Competitiveness.

Whereas Senate innovation legislation has enjoyed bipartisan support, House innovation agendas are split along party lines. The House Democratic leadership’s legislation was introduced in December 2005 by Rep. Bart Gordon (D-TN). Gordon’s three-bill package (H.R. 4434, 4435, and 4596) includes scholarships for math and science teachers, curriculum development, and professional development programs for teachers. The legislation increases federal funding for basic research in the physical sciences, mathematics, and engineering by 10% per year; provides more fellowships for graduate students and grants for early-career researchers; increases funding for federal and university laboratories; and creates an ARPA-E.

Rep. Adam Schiff (D-CA) introduced legislation identical to the bill supported by Sens. Ensign and Lieberman; however, the bill currently does not have cosponsors.

House Speaker Dennis Hastert (R-IL) and Rep. Bob Goodlatte (R-VA) announced on March 1 a Republican innovation legislative package, entitled the Innovation and Competitiveness Act. Although programmatic and funding specifics were not detailed at the event, the goals of the bill are to promote R&D, increase the U.S. scientific talent pool, reduce bureaucratic red tape, reform the legal system, and enhance the health care system through innovative technologies.

Senators unveil discussion paper on climate policy

In a marked change from the usual partisan congressional debate about climate change, Sens. Pete Domenici (R-NM) and Jeff Bingaman (D-NM), the chair and ranking member of the Senate Energy and Natural Resources Committee, released a joint paper that aims for consensus on key issues about the structure of a mandatory market-based greenhouse gas regulatory program. The senators are seeking comments on the document, which will be used to frame the discussion at an April 4 climate change conference sponsored by the committee and moderated by Domenici. The paper can be found on the committee Web site (energy.senate.gov/public).

The paper raises the overarching question of who would be regulated in a mandatory system and where regulations would be imposed in the greenhouse gas life cycle. The paper explores differences in costs and effectiveness between regulating “upstream” (closer to energy producers and suppliers) and “downstream” (at the point of emissions). It also examines whether some allowances should be allocated for free to mitigate costs or whether they should all be sold at auction.

The paper also highlights the importance of R&D investments in low-carbon energy technologies and adaptation assistance, exploring the use of permit revenue to fund such activities.

Finally, the paper raises questions about the U.S. role relative to actions taken elsewhere in the world. It asks whether a U.S. system should be compatible with the cap-and-trade systems being implemented in Europe and elsewhere. It also notes that “climate change is a global environmental problem that requires action by all major emitting countries.” The paper asks whether the United States should take some action and then make further steps contingent on the actions of other countries.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Protecting the Best of the West

Once considered the leftovers of Western settlement and land grabs, the 261 million acres of deserts, forests, river valleys, mountains, and canyons managed by the federal Bureau of Land Management (BLM) are now in hot demand. Pressure to open more of these lands for oil and gas drilling has never been greater. Traditional uses of BLM lands, including logging, livestock grazing, and mining, continue. At the same time, expanding cities and suburbs place people next to BLM lands as never before, and new technologies such as all-terrain vehicles make once-remote BLM lands widely accessible. Increasingly, the distinctive Western landscapes of BLM lands are a magnet for all who prize outdoor recreation—from hikers to off-road vehicle enthusiasts, from birdwatchers to hunters.

Congress, past presidents, and ordinary citizens have realized, if belatedly, that BLM lands are rich in unique characteristics that merit conservation: wildlife, clean water, cultural and historic relics, open space, awesome scenic vistas, and soul-nourishing solitude. In recognition of the need to protect the BLM lands with the greatest richness of natural and historical resources, the Clinton administration in 2000 designated 26 million acres as the National Landscape Conservation System (NLCS) to help keep these stellar areas “healthy, wild, and open.”

Now, conservationists of all stripes are watching the BLM closely. They ask: Can a federal agency historically attuned to maximizing resource development also address the challenge of conservation?

A recent assessment of the condition of the NLCS—and of the BLM’s stewardship of those lands—offers a litmus test. The Wilderness Society and the World Resources Institute jointly conducted the assessment and issued results in October 2005. Our report, State of the NLCS: A First Assessment, finds that the NLCS’s natural and cultural resources are at risk under the BLM’s oversight.

Fortunately, the assessment also offers good news: It is not too late for the BLM, the administration, and Congress to safeguard the public treasures of the NLCS. In order to ensure that the BLM becomes a model for conservation and scientific learning in some of the nation’s most special places, we recommend more funding and staffing, coupled with a commitment from leaders of the Department of the Interior, which oversees the BLM, to prioritize conservation on its premier Western lands. We also encourage a range of actions, including annual reporting and expanded volunteer programs, that would come at little cost to the agency or the federal budget.

From rags to riches

The federal government created the BLM in 1946 by combining the General Land Office and the Grazing Service. Today, the BLM manages more public land than the Park Service, Forest Service, or Fish and Wildlife Service. One-fifth of the land in states west of the Rocky Mountains falls under the BLM’s purview.

For decades, BLM lands were perceived as “the lands no one wanted” or areas most useful for cheap grazing and mineral extraction. Indeed, the BLM was known in some quarters as the “Bureau of Livestock and Mining.”

Yet, in fact, BLM lands are rich in a diversity of resources in addition to oil, gas, minerals, and rangeland.

Water. An estimated 65% of the West’s wildlife depends for survival on riparian areas: lush areas adjacent to waterways. The BLM administers 144,000 miles of riparian-lined streams and 13 million acres of wetlands.

Cultural resources. The BLM manages the largest, most diverse, and most scientifically important body of cultural resources of any federal land agency. Extensive evidence of 13,000 years of human history on BLM lands ranges from prehistoric Native American archaeological sites to pioneer homesteads from the 19th and early 20th centuries. With just 6% of BLM lands surveyed for cultural resources, 263,000 cultural properties have been discovered; archaeologists estimate there are likely to be 4.5 million sites on all BLM lands. The significance of and threats to these cultural resources were underscored in 2005 when the National Trust for Historic Preservation listed the entire NLCS as one of the nation’s most endangered historic places.

Paleontological resources. Fossils that are hundreds of millions of years old are preserved on BLM lands, and they provide important insights into topics such as the extinction of dinosaurs and the evolution of plant and animal communities.

Wildlife habitat. BLM lands are host to 228 plant and animal species listed as threatened or endangered and to more than 1,500 “sensitive” species. These lands provide 90 million acres of key habitat for big game such as antelope, mule deer, bighorn sheep, and elk. The lands also are important for 400 species of songbirds, and the future of sage grouse populations in the West will depend on the BLM’s protection of their habitat.

Ecosystem services. Native plants on BLM lands help to prevent the spread of costly invasive weeds, reduce the risk of wildfires, and minimize soil erosion to help keep waterways clean and healthy.

Natural playgrounds. Recreational opportunities abound on BLM lands. In 2004, some 54 million people visited these areas to hike, camp, picnic, hunt, fish, ride horses, raft, canoe, and use off-road vehicles.

Open space. BLM lands are increasingly valuable places to find solitude and silence. In the lower 48 states, nearly two-thirds of BLM lands are within an hour’s drive of urban areas, and 22 million people live within 25 miles of BLM lands.

All of these values are hallmarks of the NLCS. The NLCS brings together many of the BLM’s most sensitive landscapes: National Monuments, National Conservation Areas, Wilderness Areas, Wilderness Study Areas, Historic Trails, and Wild and Scenic Rivers. According to former Secretary of the Interior Bruce Babbitt, the NLCS “was created to safeguard landscapes that are as spectacular in their own way as national parks.” Importantly, though, NLCS areas are intended to embody a different concept than national parks by minimizing visitor facilities and the evidence of civilization’s encroachment to provide visitors a chance to see the West through the eyes of the first native peoples and pioneers.

Unlike the National Park Service, with its clear mandate to conserve natural and historical resources, the BLM must manage its lands and waters for a variety of uses that can and do conflict. In 2004, the BLM reported that 224 million of its 261 million acres were available for energy and mineral exploration and development. The agency manages approximately 53,000 oil, gas, coal, and geothermal leases, and 220,000 hardrock mining claims. In addition, 159 million acres are in livestock grazing allotments. Another 11 million acres are forest, much of which the BLM manages for commercial logging, as in western Oregon, where 0.5 million of the 2.4 million forest acres are managed intensively for timber production.

Accordingly, the BLM maintains that it is a multiple-use agency, while acknowledging that conservation is part of its mission. Indeed, federal regulations make it clear that multiple use does not trump the need for the BLM to also meet conservation goals and manage for recreation, scenic, scientific, and historical values. Although the agency can allow resource development even if it will cause degradation, its discretion is not unlimited. Moreover, the agency holds considerable—but underused—authority to restrict environmentally adverse activities to protect the land and its flora and fauna.

The BLM’s conservation responsibility and authority derive primarily from the Federal Land Policy and Management Act of 1976. This legislation makes clear that rare and special places can be protected from competing or damaging uses and that multiple use does not mean that every acre must or should be available for all uses. In this way, BLM lands taken as a whole serve multiple uses, leaving ample room, even an obligation, to manage special places with conservation goals paramount over other uses.

In fact, the BLM has legal directives to preserve most of the 26 million acres of the NLCS, particularly National Monuments and Wilderness Areas. Although the designation of the NLCS itself carries no statutory authority, the individual pieces of the system were each designated under specific authorizing legislation that imparts a specific conservation purpose. These laws include the Antiquities Act of 1906, the Wilderness Act of 1964, the Wild and Scenic Rivers Act of 1968, and the National Trails System Act of 1968.

Skewed policy agenda

The BLM’s policy agenda, however, has often been dominated by considerations that can work against conservation. The nation’s rising energy needs are placing particular pressures on BLM lands. In order to expedite oil and gas leasing and development, the agency is briskly leasing wild lands, despite a backlog of leases and drilling permits still unused by the oil and gas industry and record levels of permits issued nationally. Since 2003, the BLM has continually offered oil and gas leases on spectacular roadless lands in Utah and Colorado that have been identified (in many instances by the agency itself) as harboring wilderness values. More than 50,000 acres in proposed Colorado Wilderness Areas have been leased in the past two years alone, and more than 100,000 acres in Utah have been offered at lease auctions.

Recent BLM management plans open almost entire areas to oil and gas development. In three recent draft and final land-use plans affecting 8.6 million acres (Greater Otero Mesa in New Mexico, the Great Divide in Wyoming, and the Vernal Field Office in Utah), 97% of the total area is proposed to be open to oil and gas development. In an August 2005 speech to the Rocky Mountain Natural Gas Strategy Conference, Assistant Secretary of the Interior Rebecca Watson listed as a notable accomplishment that the BLM is processing applications for drilling permits in record numbers, with the current administration issuing more than 17,000 permits during the past four years, 74% more than the Clinton administration. The BLM estimates that it will process more than 12,000 drilling permits in the next fiscal year.

A June 2005 report by the Government Accountability Office found that the BLM’s rush to drill keeps the agency too busy to monitor and enforce clean air and water laws. During the past six years, the number of drilling permits issued annually by the BLM more than tripled, from 1,803 to 6,399. Four of the eight BLM field offices that issued 75% of these drilling permits did not have any plans in place to monitor natural or cultural resources. The report noted that BLM staffers were too busy processing drilling permit applications to have time to develop the monitoring plans.

In addition, the BLM, like other federal land management agencies, has long been caught in the jobs-versus-the-environment debate, which creates pressure to keep public lands open to oil and gas development, mining, and logging. But recent economic analysis is helping to dispel the perception that conservation on public lands is incompatible with economic prosperity.

A 2004 study by the Sonoran Institute found that protected public lands, including BLM lands such as National Monuments, are increasingly important to the economy of western communities. The changing western economy means that historically important resource-extraction sectors provide comparatively fewer jobs; personal income from resource industries such as mining, oil and gas development, and ranching represents just 8% of total personal income, down from 20% in 1970, although there is wide variation among states. Meanwhile, counties with protected public lands, or close to them, tend to have the fastest local economic growth. Areas in and around protected areas are most likely to attract business owners, an educated workforce, producer services, investment income, retirees, and real estate development, all factors in a diverse and growing economy. For example, since the designation of the BLM’s Grand Staircase-Escalante National Monument in Utah in 1996, neighboring Garfield County has seen wages shift from declining at a rate of 6% to growing at 7%, along with declines in unemployment and significant growth in personal income. Still, as long as the mythology of jobs versus environment prevails, the BLM is vulnerable to pressure from rural western communities, politicians, and extractive industries, who argue that a federal emphasis on conservation will set up roadblocks to productive uses of natural resources.

Another factor muddling the picture is the BLM’s budget structure. With its many categories and subcategories, the structure effectively discourages program integration and limits budgetary accountability. For example, the NLCS receives funding from at least seven different budget categories and subcategories, making it difficult for the BLM and members of the public to calculate the amount of money devoted to the NLCS. There also is a significant mismatch between congressional budgets and the nature of the work that the BLM performs. The BLM’s work is governed by multiple-use mandates and is ecosystem-based. Ecosystem management is a multiyear process that requires secure, consistent funding and adequate data. Congressional budget authorizations, on the other hand, normally cover only one year at a time and thus pose a significant impediment to planning and implementing longer-term projects needed to restore or protect ecosystems.

Because the BLM doesn’t include a separate budget line for the NLCS within its agency budget, and because of reallocations of funding and other cuts during the year, it is difficult to determine the amount that was allocated to the NLCS in the past fiscal year (FY 2006), or this year (FY 2007). Best estimates, however, make it clear that the NLCS operates with bare-bones funding—probably about $42 million for FY 2006, with even less for FY 2007. For comparison, consider that the NLCS budget is roughly 2.5% of the BLM’s $1.8 billion budget, for 10% of the agency’s most precious lands and waters. The NLCS’s funding is less than half of the allocation for the BLM’s energy and minerals management program, for which $108 million was appropriated for FY 2006, with $135 million proposed for FY 2007. NLCS funding also is a fraction of the funding for comparable land management agencies. The 2006 budget for NLCS translates to about $1.70 per acre, compared to the roughly $5 per acre that goes to the National Wildlife Refuge System and $19 per acre to the National Park Service. Funding for land acquisition by the four major federal land management agencies, including the BLM, via the Land and Water Conservation Fund, has declined by 80% in the past decade.

Taking stock of stewardship

Good management practice dictates that the BLM should establish a regular means of assessing the condition of its special areas in order to provide early warning of change, make conservation a priority among its other important objectives, help determine budgets, and provide the public and Congress the means to gauge progress and hold the agency accountable.

The Government Performance and Results Act of 1993 (GPRA) provides an impetus for land management agencies to plan, implement, monitor, and report on progress toward performance goals. And, in keeping with the GPRA framework, the BLM’s 2004 annual report cites several goals and accomplishments for resource protection, such as statistics on acres of riparian land restored and cultural resources stabilized.

What these general overview data do not offer is a full picture of trends, conditions, and conservation stewardship capacity. In particular, they shortchange the ability to gauge whether the BLM is meeting the unique conservation mandates of National Monuments and other places typically set aside to protect specific wildlife, plants, and their habitat, as well as large ecosystems and wilderness. Nor does the BLM produce annual reports for individual National Monuments and Conservation Areas with consistent, regular, and quantitative measures of progress toward specific conservation goals.

In order to fill this void, the Wilderness Society and World Resources Institute undertook a preliminary assessment to determine whether the BLM is meeting its conservation mandate. We decided to focus on the NLCS because its areas carry a clear conservation aim via proclamation or legislation, and because, by mandate, the BLM currently is creating management plans that will institutionalize conservation objectives for areas within the system.

For simplicity’s sake, we kept the scope of our assessment relatively narrow. We focused on 15 specially designated areas or “units” in the NLCS, some selected to reflect geographic and ecosystem diversity and others selected randomly. We then homed in on issues relevant to stewardship and ecosystem condition, such as accountability, natural resource monitoring, cultural resource protection, and visitor management. For each issue, we identified a series of indicators and measures: 35 indicators in total. For example, we used the degree to which an area is fragmented by roads as one measure of ecosystem health.

Overall, we found that the BLM is woefully lacking in funds, leadership, and data to achieve its conservation mission on NLCS lands. In our report, we include a scorecard that summarizes our findings by issue and NLCS unit. Grades of C’s and D’s dominate for issues such as the capacity to protect wild and untouched areas and to monitor special natural resources. Although the NLCS as a whole scored no higher than a C for any issue, some individual areas merited A’s and B’s for select aspects of stewardship and conservation. In particular, we found:

An understaffed and inadequately empowered conservation system. NLCS managers (the BLM staffers responsible for the day-to-day management of individual NLCS units) have neither the stature nor the authority to serve as the public face of conservation for the BLM’s special landscapes and to ensure that conservation is prioritized by their agency. Only one-third of the managers interviewed are vested with line authority: the formal authority to direct staff, with clear, consistent responsibilities to make decisions, issue orders, and allocate resources.

Most BLM National Monuments and Conservation Areas are understaffed, mostly because of funding constraints. Most areas lack dedicated time from archaeologists, ecologists, law enforcement rangers, and public education specialists. For example, only one-third of the 15 Monuments and Conservation Areas examined have more than one full-time law enforcement ranger; several have only a half-time ranger. A ranger must patrol, on average, 200,000 acres, making it impossible to check remote areas or specific sites regularly. Growth in enforcement staff needs to keep pace with growth in use; in some areas, visitor numbers have quadrupled in the past five years.

Although most National Monuments were designated under the Antiquities Act for “scientific study” and many Conservation Areas offer excellent scientific learning opportunities for scientists, students, and members of local communities, few of them have the staff to capitalize on those objectives. About 80% of National Monuments and Conservation Areas have a public education specialist, but typically this is less than a full-time or even half-time outreach professional. As one BLM staff member said, “We always identify in our work plans that we’re going to use environmental education and interpretation as a major tool to get public compliance with land stewardship, but then we fail to fund environmental education, or try to add it to an already overburdened staff person.”

A paucity of natural resource monitoring and trend data. Large data gaps make it difficult, even impossible, for the BLM to effectively manage its conservation lands and waters. For example, only 4 of 15 National Monuments and Conservation Areas conducted complete inventories for invasive weeds, and rarely do Monuments and Conservation Areas have comprehensive water-quality monitoring programs.

Collecting more data is not always the priority need. Our queries of BLM staff suggest that in some places, much detailed data already is available on key indicators of resource condition. For example, the Headwaters Forest Reserve in California has summarized trend data for threatened and endangered species into an easy-to-interpret format. More often, however, the data are not rendered into useful information; they are not compiled, integrated, and analyzed to facilitate place-specific assessments by NLCS managers.

Data on recreational activities, which are important for gauging pressures on resources and deciding how many law enforcement rangers are needed, and where, are fraught with inconsistency. The BLM does track total visitors to each part of the NLCS, as well as nearly a dozen recreational uses. However, during the past five years, some NLCS units have changed how they measure visitor use, rendering trend data nearly useless.

Ecosystem health: Condition unknown. Data to assess ecosystem condition in the NLCS are poor, due in part to the lack of comprehensive and consistent place-specific monitoring programs. One significant concern is the degree to which wildlife habitat is fragmented by roads and routes. On average, 76% of the land in the 15 areas examined is within one mile of a road, and 90% is within two miles of a road. Abundant research has demonstrated that roads can have a negative impact on wildlife at these distances, and they also facilitate damage from off-road vehicles, the invasion of non-native animal and plant species, and the spread of fires.
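
Fragmentation measures of this kind can be derived from standard spatial data. The sketch below illustrates one way such an indicator might be computed with the geopandas library; the file names, layers, projection, and one-mile threshold are illustrative assumptions, not a description of the method actually used in the assessment.

    # Illustrative sketch: share of an NLCS unit's area lying within one mile
    # of a road. File names and inputs are hypothetical.
    import geopandas as gpd

    MILE_M = 1609.34  # one mile, in meters

    # Hypothetical inputs: unit boundary and roads, reprojected to an
    # equal-area CRS (CONUS Albers) so areas and buffer distances are in meters.
    unit = gpd.read_file("nlcs_unit_boundary.shp").to_crs(epsg=5070)
    roads = gpd.read_file("roads.shp").to_crs(epsg=5070)

    unit_geom = unit.geometry.unary_union
    road_buffer = roads.geometry.buffer(MILE_M).unary_union

    near_road_area = unit_geom.intersection(road_buffer).area
    fraction_near_road = near_road_area / unit_geom.area
    print(f"Share of unit within one mile of a road: {fraction_near_road:.0%}")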

Available data reflect widely varying land health conditions systemwide. For example, 95% of the riparian areas assessed in Colorado’s Gunnison Gorge National Conservation Area were judged to be in “proper functioning condition” (meaning that they are able to minimize erosion, improve water quality, and support biodiversity). In contrast, only 7% of the streams in Colorado’s Canyons of the Ancients National Monument meet the proper functioning standard. Similarly, invasive species problems range from areas where nearly all of the land—tens of thousands of acres—is affected, to areas with virtually none affected.

Endangered cultural resources. The condition of cultural resources is difficult to summarize, because the BLM lacks the capacity to adequately monitor cultural sites. Indeed, the agency has comprehensively inventoried cultural resources in just 6 to 7% of the total area encompassed by Monuments and Conservation Areas. Some of the archaeologists interviewed thought the majority of their sites were in stable condition, but all described sites they knew were at risk, typically due to erosion, accessibility, looting, or careless campers.

The majority of cultural inventories are carried out when a drilling or grazing permit, power line, or other development is proposed and the BLM must meet its legal obligation to comply with the National Historic Preservation Act and assess impacts to cultural resources in those permit areas. With the rapid increase in permit application processing, too often these cultural resource compliance surveys are conducted late in the process: not when the agency is considering whether to lease, but after private investments have already been made. And, unfortunately, many BLM archaeologists report that the majority of their time (60 to 70%) is occupied by compliance work related to proposed development. Few have time or funds to undertake landscape-scale archaeological surveys in areas of highest priority to inform land-use plans, road closures, and the management of public access and recreation.

Reinventing a conservation agency

Elevating and advancing the BLM’s conservation mission, especially in the face of conflicting priorities and pressures, requires actions by the agency, Congress, and concerned stakeholders nationwide.

Among the steps that the BLM can take, some will require a shift in priorities, but most will require only modest amounts of funding. For example, the BLM should:

Undertake regular indicator-driven conservation assessments. The old business adage “you manage what you measure” applies equally to the BLM and conservation. Setting specific conservation goals for the NLCS and measuring progress toward them would help the agency focus on conservation as a priority, and reward progress. The indicators of progress need not be all or only the ones that we used in State of the NLCS; indeed, the BLM should engage in a process with nongovernmental organizations and other partners to agree on a set of measures for natural and cultural resource health. The agency should then commit to tracking those indicators at the NLCS unit level in annual or biennial reports. This would enable basic public oversight and foster informed participation in public lands planning, management, and protection. Reports undoubtedly would improve the public’s impression of the BLM as an accountable and capable conservation organization. (We recently learned that the BLM does plan to begin issuing annual reports on its NLCS National Monuments and Conservation Areas in 2006; it remains to be seen what data and quantitative measures of progress the reports will include).

Plan for resource conservation. The BLM is still crafting “Resource Management Plans” (the BLM’s term for land-use plans) for about half of its National Monuments and Conservation Areas. These plans, which serve as blueprints for decisionmaking for up to two decades, are a sterling opportunity to provide clear and unequivocal conservation guidance. For example, plans should give direction regarding species monitoring and water-quality monitoring, and they should include a cultural resource protection program. Also critical is the inclusion of a plan for roads and travel within the areas that minimizes damage from motorized vehicle use and closes unnecessary or damaging roads, with a specific time frame for closures.

Replicate best practices for conservation. The State of the NLCS report identifies more than a dozen laudable examples of BLM projects that are creatively improving or protecting resources in NLCS areas. For example, in Arizona’s Agua Fria National Monument, volunteers and students record petroglyphs, whereas in Idaho’s Snake River Birds of Prey National Conservation Area, BLM staff place signs and paths strategically to guide visitors away from overused campsites and reduce off-road driving to prime locations for raptor viewing. Also in Idaho, at Craters of the Moon National Monument, the BLM found that adding the image of an American flag to signs along roads and trails discourages the use of the signs for target practice, reducing the need for their costly replacement. To encourage such best practices, restoration or land protection ideas could be shared at an annual BLM “NLCS Conservation Congress” and highlighted with annual BLM conservation awards for outstanding personnel and projects.

Expand site steward programs and volunteer programs. More than half of the National Monuments and Conservation Areas examined benefit from strong and effective cultural resource stewardship programs that use volunteers— often archaeologists themselves—as site monitors, educators, and protectors of special places. These volunteers help shorthanded BLM staff and enhance the agency’s capacity to accomplish its goals. Volunteers in many areas also assist with natural resource protection and restoration, undertaking tasks such as removing invasive plants and converting unnecessary roads to foot, horseback, and mountain-bike use.

For its part, Congress can play a major role in reinventing the BLM as a conservation agency. It should:

Give the NLCS a statutory basis. Just as the National Park Service Organic Act of 1916 provides the Department of the Interior with a clear management mandate for parks, the NLCS needs a similar basic law to guide its management. Congress should provide that law in the form of an NLCS “organic” act giving BLM a clear mission of protecting the NLCS. An NLCS Act need not change existing uses of NLCS lands but could help prioritize and clarify the BLM’s conservation agenda.

Increase funding for BLM conservation. Congressional funding priorities should include appropriations for natural resource monitoring, cultural resources inventory and monitoring, habitat restoration, and law enforcement, particularly in areas where visitor use is growing most rapidly or resources are most fragile. Another priority is funding the implementation of Resource Management Plans for various NLCS units that the BLM is scheduled to complete in 2006 and 2007.

Additional funding for land acquisition also is critical for the BLM to fend off encroaching residential and commercial development of private inholdings, and to create buffer zones around its most special ecosystems. One source of revenue for conservation budgets could be some of the hundreds of millions of dollars generated from mineral development on BLM lands.

Reorient the BLM budget structure toward conservation. Congress should create a budget category for management activities devoted to conservation and ecological restoration. Currently, conservation funds are scattered in several diverse budget categories. A subcategory of the new conservation/ecological restoration budget category should be devoted to the NLCS.

This is a time of great challenge for the NLCS. Without BLM leadership, congressional funding, and citizen involvement, significant segments of the NLCS are likely to suffer serious degradation, possibly forever. The path forward is clear: It is up to the nation to seize the opportunity to protect some of its greatest public lands.

Straight Talk: Don’t “Dis” Chinese Science

Considering the worldwide attention being paid to the growing economic, technological, and scientific prowess of China, one would expect that the White House Office of Science and Technology Policy (OSTP) and the U.S. Department of State would be devoting significant attention to the periodic meetings at which U.S. and Chinese leaders discuss scientific cooperation. That assumption would be wrong. Those meetings are scripted affairs concerned more with protocol than science.

China trails behind only the United States and Japan in investment in science and technology (S&T), and its pool of scientifically trained human resources is also among the top three in the world. In 2003, China became the world’s largest recipient of foreign direct investment (FDI), surpassing the United States. Chinese manufactured goods are everywhere. Lenovo Computers, a Chinese firm spun off from the Chinese Academy of Sciences, recently concluded an agreement to produce IBM personal computers. As New York Times columnist Thomas L. Friedman has put it, in a period of 30 years we will have witnessed the transition from “sold in China” to “made in China” to “designed in China” to “dreamed up in China.”

China’s growing strengths and capabilities should be neither surprising nor alarming. This is what one should expect from a nation that has always respected learning and that now feels ready to assume its proper place in the world. What should alarm us, however, is how ill-equipped the United States is to understand these developments and their implications.

The U.S. government’s apparatus for focusing on China’s S&T, let alone on its growing capacity for innovation, is woefully inadequate and scattered. OSTP is nominally in charge of conducting the U.S.-China Joint Commission Meeting (JCM) on Cooperation in Science and Technology roughly every two years under the U.S.-China S&T Agreement. The State Department’s Office of Environment, Science, and Technology (OES) is also involved. The JCM is supposed to provide policy leadership to the government-to-government S&T relationship. However, what actually happens is quite different.

Several months before an impending JCM, the OES begins holding meetings of the ad hoc Interagency Working Group on U.S.-China S&T Cooperation, which includes representatives from the key S&T agencies. A central goal of the interagency meetings is to review suggestions for increased cooperation—usually coming from the Chinese side—and to elicit suggestions from the U.S. agencies. On the U.S. side, no funds are budgeted for cooperative activities with China, so it is a challenge for the agency representatives to come up with creative ideas for enhancing the relationship. Usually they search through their already funded domestic programs to see what areas can be beneficially shared with Chinese counterparts.

The lack of funding is so severe that the results can be farcical. For example, neither OSTP nor State has enough money even to host a dinner for the foreign guests. The hat is usually passed around to the better-funded agencies, who themselves must find creative ways of providing the paltry amounts needed. The amount of staff time spent in doing this would shock the public.

The U.S. private sector has much more extensive interaction with China, but the United States does not even monitor this interaction consistently to ensure that U.S. interests are being protected. During the 1970s, before formal relations were established with China, government agencies and private foundations supported the Committee on Scholarly Communication with the People’s Republic of China (CSCPRC), which was located in the National Academy of Sciences and composed of eminent scientists and China scholars. The committee played an active role in guiding the S&T relationship, but once diplomatic relations were established in 1979, government agencies began to withdraw their financial support under the assumption that the government itself would be doing what the committee did. That did not happen.

One of the few activities focusing on S&T relations with China is the U.S.-China Economic and Security Review Commission, which examines whether joint activities could provide China with S&T information beneficial to its military. This hardly qualifies as cooperation. Universities are not much better. Although there are specialized academic experts who understand where China is heading in terms of economy, security, commerce, law, and trade, the number of scholars who know anything about China’s S&T is woefully inadequate. Most of them are aging, and there is no one in the pipeline to replace them. At the same time, there are hundreds of U.S. scientists and engineers working on their own initiative with Chinese colleagues on cooperative research projects. Government makes no effort to tap their connections or insights.

A few encouraging indications of what can be done do exist. For the past six years, the National Science Foundation has supported a George Mason University program that sponsors a series of policy dialogues at which leading Chinese and U.S. scientists, engineers, and government representatives explore the important global science issues for the knowledge-based economies of the 21st century. OSTP, recognizing the value of such dialogues, has agreed with the Ministry of Science and Technology of China to hold a comparative science policy dialogue at the next meeting of the JCM. Texas A&M University organized an extremely effective meeting in November 2003 of several hundred U.S. and Chinese researchers and government officials to review U.S.-China relations past, present, and future and to explore ways of working together. Most recently (September 2005), the Levin Institute of the State University of New York launched its Center for the Study of Science, Technology and Innovation in China (CSTIC). These efforts are pointed in the right direction, but much more is needed.

Other countries have moved much more quickly in initiating creative programs with the Chinese. France has established joint laboratories with the Chinese Academy of Sciences on catalysis and public health. The Deutsche Forschungsgemeinschaft has built the Sino-German Center for Research Promotion on the grounds of the National Natural Science Foundation of China, where German scholars and Chinese scholars may interact. European Union scholars are actively pursuing a common dialogue on the impact of China’s growing capacities in these areas. Britain, France, and Germany are actively recruiting Chinese students and scholars to their universities and research laboratories. The Chinese are understandably turning to these countries for the development of new partnerships.

The U.S. government should initiate a national dialogue with representatives from industry and academia to establish a strategy for ensuring that the nation has the capacity to understand what is happening in China and to help develop joint activities of benefit to both countries. It should establish academic centers that will educate the next generation of scholars capable of understanding global science, technology, and innovation and their broad impact on all important aspects of our global relationships, including the spread of democracy, security, trade, commerce, and politics. Such centers would bring together scientists and engineers engaged in cooperation with other countries, scholars and students who understand the broad importance of these relationships, and industry representatives engaged in global competition so that all may benefit from each other's viewpoints and expertise. The centers could also produce the research and expertise that the government needs to guide its S&T relationships with other countries, such as China, and to capitalize quickly on opportunities for collaboration. Although this will not solve all the problems, it would be an important component of building an overall strategy. Future U.S. leadership in science, technology, and innovation will depend to a growing extent on its ability to cooperate creatively with other S&T powers, and China will undoubtedly be among them.

The U.S. Energy Subsidy Scorecard

In his State of the Union address on January 31, 2006, President Bush called for more research on alternative energy technologies to help wean the country from its oil dependence. The proposal was not surprising: After all, R&D investment has long been a staple of government efforts to deal with national challenges.

Yet despite its prominent role in the national debate, R&D has constituted a relatively small share of overall government investment in the energy sector since 1950. According to our analysis, the federal government invested $644 billion (in 2003 dollars) in efforts to promote and support energy development between 1950 and 2003. Of this, only $60.6 billion or 18.7% went for R&D. It was dwarfed by tax incentives (43.7%).

Indeed, our analysis makes clear that there are diverse ways in which the federal government has supported (and can support) energy development. In addition to R&D and tax policy, it has used regulatory policy (exemption from regulations and payment by the federal government of the costs of regulating the technology), disbursements (direct financial subsidies such as grants), government services (federal assistance provided without direct charge), and market activity (direct federal involvement in the marketplace).

SURPRISES ABOUND. TAX SUBSIDIES OUTPACE R&D SPENDING. SOLAR R&D IS WELL FUNDED. OIL PRODUCTION IS THE BIG WINNER. COAL RECEIVES ALMOST AS MUCH IN TAX SUBSIDIES AS IT DOES FOR R&D. NUCLEAR POWER RECEIVES MUCH LESS THAN COAL FOR R&D.

We found that R&D funds were of primary importance to nuclear, solar, and geothermal energy. Tax incentives comprised 87% of subsidies for natural gas. Federal market activities made up 75% of the subsidies for hydroelectric power. Tax incentives and R&D support each provided about one-third of the subsidies for coal.

As for future policy, there appears to be an emerging consensus that expanded support for renewable energy technologies is warranted. We found that although the government is often criticized for its failure to support renewable energy, federal investment has actually been rather generous, especially in light of the small contribution that renewable sources have made to overall energy production. As the country maps out its energy plan, we recommend that federal officials pay particular attention to renewable energy investments that will lead to market success and a larger share of total supply.

The power of tax incentives

Policies that allowed energy companies to forgo paying taxes dwarfed all other kinds of federal incentives for energy development. Tax policy accounted for $281.3 billion of total federal investments between 1950 and 2003, with the oil industry receiving $155.4 billion and the natural gas industry $75.6 billion.

Distribution of Federal Energy Incentives by Type, 1950-2003

Source: Management Information Services, Inc.

The dominance of oil

The conventional wisdom that the oil industry has been the major beneficiary of federal financial largess is correct. Oil accounted for nearly half ($302 billion) of all federal support between 1950 and 2003.

Distribution of Federal Energy Incentives among Energy Sources, 1950-2003

Source: Management Information Services, Inc.

Renewable energy not neglected

The perception that the renewable industry has been historically shortchanged is open to debate. Since 1950, renewable energy (solar, hydropower, and geothermal) has received the second largest subsidy—$111 billion (17%), compared to $63 billion for nuclear power, $81 billion for coal, and $87 billion for natural gas.
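As a quick consistency check, the source-by-source totals reported here do add up to the $644 billion figure cited earlier. A minimal sketch in Python, using only the rounded numbers given in the text:

```python
# Cross-check: federal energy incentives by source, 1950-2003, in billions of
# 2003 dollars, as reported in the text (rounded to whole billions).
subsidies = {
    "oil": 302,
    "renewables (solar, hydro, geothermal)": 111,
    "natural gas": 87,
    "coal": 81,
    "nuclear": 63,
}

total = sum(subsidies.values())
print(f"total: ${total}B")  # 644, matching the overall figure cited earlier
for source, amount in sorted(subsidies.items(), key=lambda kv: -kv[1]):
    print(f"{source:38s} ${amount:>3d}B  {100 * amount / total:4.1f}%")
```

The shares that fall out of this tally (oil at roughly 47%, renewables at roughly 17%) match the percentages quoted above.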

Federal R&D Expenses for Selected Technologies, 1976-2003

LEGEND: PV: Photovoltaic (renewable); ST: Solar Thermal (renewable); ANS: Advanced Nuclear Systems (nuclear); CS: Combustion Systems (coal); AR&T: Advanced Research and Technology (coal); LWR: Light Water Reactor (nuclear); Mag: Magnetohydrodynamics (coal); Wind: Wind Energy Systems (renewable); ARP: Advanced Radioisotope Power Systems (nuclear).

Source: Management Information Services, Inc.

Cost/benefit mismatch

Considerable disparity exists between the level of incentives received by different energy sources and their current contribution to the U.S. energy mix. Although oil has received roughly its proportionate share of energy subsidies, nuclear energy, natural gas, and coal may have been undersubsidized, and renewable energy, especially solar, may have received a disproportionately large share of federal energy incentives.

Federal Energy Incentives through 2003 Compared to Share of 2003 U.S. Energy Production

Source: Management Information Services, Inc.

Skewed R&D expenditures

Recent federal R&D expenditures bear little relation to the contributions of various energy sources to the total energy mix. For example, renewable sources excluding hydro produce little energy or electricity but received $3.7 billion in R&D funds between 1994 and 2003, whereas coal, which provides about one-third of U.S. energy requirements and generates more than half of the nation’s electricity, received just slightly more in R&D money ($3.9 billion). Nuclear energy, which provided 10% of the nation’s energy and 20% of its electricity, was also underfunded, receiving $1.6 billion in R&D funds.
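One way to see the skew concretely is to divide each source's recent R&D funding by its share of electricity generation. In the sketch below, the R&D totals and the coal and nuclear shares come from the text (coal's "more than half" is rounded to 51%); the roughly 2% share assumed for non-hydro renewables is an illustrative estimate, not a figure from the analysis.

```python
# Rough measure of the R&D skew: 1994-2003 R&D dollars (billions) divided by
# each source's share of 2003 U.S. electricity generation (percentage points).
# Coal (~51%, "more than half") and nuclear (20%) shares are from the text;
# the ~2% figure for non-hydro renewables is an assumed, illustrative value.
rd_billions = {"non-hydro renewables": 3.7, "coal": 3.9, "nuclear": 1.6}
electricity_share = {"non-hydro renewables": 2.0, "coal": 51.0, "nuclear": 20.0}

for source, rd in rd_billions.items():
    per_point = rd / electricity_share[source]
    print(f"{source:22s} ${per_point:.2f}B of R&D per point of electricity share")
```

On this crude measure, and given the assumed renewables share, non-hydro renewables received more than 20 times as much R&D per point of electricity generation as coal or nuclear power.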

Federal R&D Energy Expenditures, 1994-2003, Compared to 2003 U.S. Electricity Production

Source: Management Information Services, Inc.

For What the Tolls Pay: Fair and Efficient Highway Charges

Hydrogen cars, expensive oil, fuel efficiency standards, and inflation frighten those interested in maintaining and improving U.S. highways. All of these forces could erode the real value of fuel taxes that now are the largest single source of funding for highway programs and an important source of transit funding as well. Because of this worry, the Transportation Research Board convened a committee to carefully examine the future of the fuel tax.

The committee uncovered both good and bad news. The good news is that there is nothing structurally wrong with the fuel tax that will cause the real value of revenues to decline dramatically over the next couple of decades. The bad news is that it is a very crude way to raise revenues for our highway system. Switching to per-mile fees, the committee concluded, would be a much more efficient and equitable approach.

Looking at the good news first, worries that alternative fuels and improving fuel efficiency will undermine the finance system are definitely exaggerated. Radical improvements in efficiency will take a long time to develop and be implemented, and even less radical improvements, such as hybrid engines, affect fuel consumption very slowly because it takes so long for new models to replace old models in the U.S. car fleet. Moreover, Americans are addicted to oil partly because they are addicted to power. If you make an engine more efficient, they will want it bigger. Consequently, improving technology does not reduce real fuel tax revenues per vehicle mile nearly as much as one might think. Indeed, they have been roughly constant for a long time.
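The arithmetic behind this claim is simple: the fuel tax collected per vehicle-mile is just the per-gallon rate divided by the fleet's average fuel economy, so revenue per mile erodes only as fast as the on-road average actually improves. A minimal sketch (the 18.4-cent rate is the current federal gasoline tax; the fuel-economy values are round numbers chosen for illustration):

```python
# Federal fuel tax revenue per vehicle-mile at the 18.4 cent/gallon gasoline
# tax rate, for a few illustrative fleet-average fuel economies.
FEDERAL_TAX_PER_GALLON = 0.184  # dollars

def revenue_per_mile(mpg: float) -> float:
    """Fuel tax collected per vehicle-mile at a given fuel economy."""
    return FEDERAL_TAX_PER_GALLON / mpg

for mpg in (20, 22, 25):
    print(f"fleet average of {mpg} mpg: {100 * revenue_per_mile(mpg):.2f} cents per mile")
```

Even a fleet-wide jump from 20 to 25 mpg, which would take many years of vehicle turnover, trims revenue per mile by only about a fifth.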

One cannot be quite as certain regarding the future price of oil. There is some possibility that demand may erode because of an upward trend in the price of gasoline. Department of Energy projections (which have been generally consistent with those from other prominent sources) are optimistic that the price of oil will not surge over the next 15 years or so. But it must be admitted that energy experts did not anticipate the recent price increase to over $60 per barrel.

However, the evidence strongly suggests that recent oil price increases are as much the result of geopolitical forces as they are the result of fundamental supply shortages. It is true that China and India are becoming major oil consumers as they grow rapidly, but it is also true that supplies are increasing. There may be limited supplies of the type of oil that we pump from the ground today, but as one expert puts it, the sources of oil will just become heavier and heavier. If light crude runs out, we’ll turn more to heavy crude. If that becomes scarce, tar sands will be exploited more fully, and if they become expensive, we’ll turn to oil shale. In the process, oil will become more expensive, but it will be a slow process. Of course, wars, boycotts, and other disturbances can cause major price spurts that make optimistic forecasts look foolish, but one has no choice but to base long-run forecasts on fundamental trends, and they are not alarming.

The imposition of severe fuel efficiency standards could upset the gasoline-powered apple cart, but new radical regulation seems politically implausible in the near future. Currently, our two political parties are so closely competitive that no one wants to ask the American people to make major sacrifices. We may be addicted to oil, as the president suggests, but as Mae West remarked, “Too much of a good thing can be wonderful.”

Inflation concerns

The possibility of accelerating inflation raises a political rather than a technical concern. The federal fuel tax is a unit tax; that is, it does not vary with the price of gasoline as a percentage sales tax would. Inflation therefore erodes the real value of the tax. Some, like the Chamber of Commerce (in the National Chamber Foundation’s 2005 report Future Highway and Public Transportation Finance), have suggested indexing the tax for inflation. However, that solution may not be politically sustainable. Politicians at the state and local levels often suspend indexing if it becomes the least bit painful.

AN IMPROVED PRICING SYSTEM NOT ONLY HAS THE POTENTIAL FOR GREATLY INCREASING THE EFFICIENCY OF USING EXISTING ROADS, IT CAN ALSO BE HELPFUL IN GUIDING THE ALLOCATION OF NEW HIGHWAY INVESTMENT.

Historically, federal and state politicians have compensated for inflation by periodically raising tax rates. There is some question whether this is possible in the severe antitax climate in which we live today, but if this is a problem, it has nothing to do with the basic structure of the fuel tax. It is a political problem afflicting all forms of taxation.

But it should also be noted that politicians have not been strongly pressured by inflation in recent years. First, the inflation rate has been extremely low by historical standards. Second, at the federal level, the government has been able to capture additional revenues for the highway system without raising tax rates. In 1993, the federal gas tax was increased for the express purpose of reducing the deficit. The proceeds were not to be spent on highways or anything else. In 1997, those revenues were redirected into the highway trust fund and are now available to finance highway expenditures. More recently, an ethanol subsidy that was previously financed out of the highway trust fund will, in the future, be financed out of general revenues, thus releasing more resources for highways.

Congress may now have run out of such devices for increasing federal highway funding, which supports about a quarter of all highway spending. It will be interesting to see how Congress reacts in the future, especially if inflation accelerates a bit. In addition, many think that the most recent federal highway bill will more than spend the earmarked revenues that are available, although this is a controversial issue. If true, that, along with more inflation, may pressure Congress to return to its historical practice of occasionally raising the fuel tax when the federal highway program is reauthorized.

Per-mile fees

Although there are few reasons to fear a rapid erosion of fuel tax revenues in the near future, major revenue increases also seem unlikely. Congress and the state legislatures could raise more revenue with the gas tax if they chose to do so, but the political opposition is formidable. That makes it unlikely that enough will be spent in the near future to improve highway quality significantly, and the nation will have to continue to live with the current level of congestion. But relying solely on increased highway expenditures to reduce congestion is probably not cost-effective. Congestion must also be attacked by imposing extra costs on those who cause it.

Whether the nation just wants to maintain the quality of the current system or to improve it, there is good reason to reform our current approach to financing. In searching for alternatives, there is a strong argument for sticking with the established principle that users should pay and that the resulting revenues should be dedicated to highway expenditures. The revenues collected should be related to the costs that the vehicle imposes on the system, including congestion costs. In an extreme version of the principle, all the revenues and no more should be spent on highways, but the present practice of dedicating some revenues to mass transit certainly is defensible, because mass transit expenditures benefit highway users by reducing congestion.

The current fuel tax is only vaguely related to the amount of wear and tear that a vehicle imposes on the road, and it does not vary with the level of congestion. Per-mile fees that vary with the type of vehicle and time of day would be much more efficient and equitable.

Fifteen years ago, it was not possible to think about collecting per-mile fees efficiently. Costs included constructing tollbooths, paying toll takers, and most important, waiting in line at the tollgate. New technology holds the promise of virtually eliminating such costs.

In the immediate future, developments such as the E-ZPass electronic toll collection system (used on many toll roads and bridges throughout the northeastern states) and license plate imaging greatly increase the opportunities for tolling at low cost. We should exploit these opportunities to the extent possible.

In the longer run, global positioning system (GPS) technology makes it theoretically possible to charge for every road in the country, with fees varying by type of vehicle and the level of congestion. Of course, we may never wish to go that far, and much research is necessary before committing to that path. It is necessary to determine what type of technology is most efficient and to develop safeguards that will assure the public that their privacy will be protected. It is also important to resolve the many problems that will arise as we move from the current system of financing to something completely new. The necessary technology is not costless to develop, but it is very cheap. It is possible that GPS systems will be installed in almost all new cars in the near future, even if they are not required for the purpose of levying a per-mile fee.
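To make the idea concrete, the short sketch below prices a trip under a purely hypothetical fee schedule; the vehicle classes, base rates, and congestion multipliers are invented for illustration and are not drawn from the committee's report.

```python
# Hypothetical per-mile fee schedule: charges vary with vehicle type (a proxy
# for road wear) and with the congestion level at the time of travel.
# All rates and multipliers below are made-up, illustrative values.
BASE_RATE_CENTS_PER_MILE = {"car": 1.0, "light truck": 1.5, "heavy truck": 10.0}
CONGESTION_MULTIPLIER = {"off-peak": 1.0, "shoulder": 2.0, "peak": 4.0}

def per_mile_fee(vehicle: str, period: str, miles: float) -> float:
    """Return the charge in dollars for a trip of the given length."""
    rate_cents = BASE_RATE_CENTS_PER_MILE[vehicle] * CONGESTION_MULTIPLIER[period]
    return rate_cents * miles / 100.0

# Example: a 12-mile commute by car at the peak versus the same trip off-peak.
print(per_mile_fee("car", "peak", 12))      # 0.48 dollars
print(per_mile_fee("car", "off-peak", 12))  # 0.12 dollars
```

In practice the rates would be set through the political process, but the principle is the same: heavier vehicles and peak-period trips would pay more because they impose more wear and more congestion on the system.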

The president’s 2007 budget proposal agrees that new forms of highway funding are desirable. It requests $100 million for a pilot program to involve up to five states in evaluating more efficient pricing systems. The necessary research has already started with an experiment in Oregon, and the Germans have initiated a GPS system for levying fees on trucks on the Autobahn, the national motorway system.

An improved pricing system not only has the potential for greatly increasing the efficiency of using existing roads, it can also be helpful in guiding the allocation of new highway investment. If a certain segment of road is yielding revenues far in excess of the cost of building it, it is a pretty good indication that an expansion of capacity in the area is warranted. If, on the other hand, revenues are not sufficient to pay for costs, any request for new construction should be critically examined.

Although such a system holds the promise of implementing the economist’s dream of perfectly pricing the highway system, it would be naïve to believe that a perfect system could ever be implemented. The per-mile fees will be set by politicians operating in a political environment. There will be strong pressures to keep fees low just as there are pressures today to avoid fuel tax increases. In some cases, there will be legitimate arguments for subsidies. For example, the nation may choose to subsidize rural road networks much as it now subsidizes mail service to rural areas.

The equity argument

Many will question charging per-mile fees out of a concern that it will impose a special hardship on the poor. As the notion of charging for road use is discussed more and more, there are many derogatory comments about “Lexus lanes,” as though only the rich would benefit from a reduction in congestion. It can be noted that it is frequently extremely important for poorer people to get to work on time or to pick up their kids from childcare before overtime fees are charged. But such arguments do not resolve the problem. Some people will be worse off as the result of a per-mile fee, and some of the people who are worse off will be poor.

It is not uncommon to face tradeoffs between economic efficiency and a concern for equity. But there are better ways to protect the poor than to prevent a major improvement in the efficiency of our transportation system. If it is determined that fees particularly hurt the poor—and more research on this question is probably warranted, given that the poor also pay the current fuel tax—policies that make the earned income credit or other welfare programs more generous can be considered. If it is deemed desirable to target additional assistance more precisely on poor highway users, a toll stamp equivalent to the food stamp program might be contemplated, although administrative costs would be very high. It may not be worth it to try for very precise targeting. The basic point is that there are other ways to deal with poverty that are more efficient than not charging properly for roads.

Expanding tolling now would acquaint people with the concept. It is easier to start levying tolls on specific lanes when there are alternative lanes that are free. That will make the public aware of the benefits of congestion pricing. If there are no howls of anguish, politicians might be less inclined to oppose road pricing.

Many years ago, the economist William Vickrey began extolling the virtues of per-mile fees that would vary with the level of congestion. Having been trained originally in engineering, he went so far as to provide detailed discussions of complex systems that would put wires under the street to measure the distance traveled by particular cars at different times of the day. He died tragically just before traveling to Stockholm to receive the Nobel Prize in economics. At the time, we were on the edge of developing new technology that could turn his dream into a practical reality at low cost. Wherever he is, he must be smiling.

Let the Internet Be the Internet

Now that the Internet has become a keystone of global communications and commerce, many individuals and institutions are racing to jump in front of the parade and take over its governance. In the tradition of all those short-sighted visionaries who would kill the goose that lays the golden eggs, they seem unable to understand that one reason for the Internet’s success is its unique governance structure. Built on the run and still evolving, the Internet governance system is a hearty hybrid of technical task forces, Web-site operators, professional societies, information technology companies, and individual users that has somehow helped to guide the growth of an enormous, creative, flexible, and immensely popular communications system. What the Internet does not need is a government-directed, top-down bureaucracy that is likely to stifle its creativity.

The call to “improve” Internet governance was heard often at the United Nations (UN)–organized November 2005 World Summit on the Information Society (WSIS) in Tunis, which was a follow-up to the December 2003 summit in Geneva. Although many different topics were on the agenda in Geneva and Tunis, by far the largest amount of controversy (and press coverage) was generated by debates over Internet governance. The summit participants had very different ideas about how the Internet should be managed and who should influence its development. Many governments were uncomfortable with the status quo, in which the private companies actually building and running the Internet have the lead role. One hot-button issue was the management of domain names, which today is overseen by the Internet Corporation for Assigned Names and Numbers (ICANN), an internationally organized nonprofit corporation. A number of countries feel that the U.S. government exerts too much control over ICANN through a memorandum of understanding between ICANN and the U.S. Department of Commerce. As a result, a number of proposals were put forward to give governments and intergovernmental organizations, such as the UN, more control over the domain-name system.

But the debate over ICANN was just part of a much bigger debate over who controls the Internet and the content that flows over it. At the Geneva Summit, a UN Working Group on Internet Governance (WGIG) was created to examine the full range of issues related to management of the Internet, which it defined as “the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.”

This definition would include the standards process at organizations such as the Internet Engineering Task Force (IETF), the International Telecommunication Union (ITU), and the World Wide Web Consortium (W3C), as well as dozens of other groups; the work of ICANN and the regional Internet registries that allocate Internet protocol addresses; the spectrum-allocation decisions regarding WiFi and WiMax wireless Internet technologies; trade rules regarding e-commerce set by the World Trade Organization; procedures of international groups of law enforcement agencies for fighting cybercrime; agreements among Internet service providers (ISPs) regarding how they share Internet traffic; and efforts by multilateral organizations such as the World Bank to support the development of the Internet in less developed countries. (A very useful summary of the organizations shaping the development and use of the Internet has been created by the International Chamber of Commerce at http://iccwbo.org/home/e_business/Internet%20governance.asp.)

The main reason the Internet has grown so rapidly, and the reason so many powerful applications can run on it, is that the Internet was designed to provide individual users with as many choices and as much flexibility as possible, while preserving the end-to-end nature of the network. And the amount of choice and flexibility continues to increase. Because there are competing groups with competing solutions to users’ problems, users, vendors, and providers get to determine how the Internet evolves. The genius of the Internet is that open standards and open processes enable anyone with a good idea to develop, propose, and promote new standards and applications.

THE FARMER IN CENTRAL AFRICA, THE TEACHER IN THE ANDES, OR THE SMALL MERCHANT IN CENTRAL ASIA DOES NOT CARE ABOUT WHERE ICANN IS INCORPORATED OR HOW IT IS STRUCTURED.

The governance of the Internet has been fundamentally different from that of older telecommunications infrastructures. Until 20 to 30 years ago, governance of the international telephone system was quite simple and straightforward. Governments were in charge. In most countries, they either ran or owned the monopoly national telephone company. Telephone users were called “subscribers,” because like magazine subscribers they subscribed to the service offered at the price offered and did not have much opportunity to customize their services. When governments needed to cooperate or coordinate on issues related to international telephone links, they worked through the ITU.

The model for Internet governance is completely different. At each level, there are many actors, often competing with each other. As a result, users—not governments and phone companies—have the most influence. Hundreds of millions of Internet users around the world make individual decisions every day, about which ISP to use, which browser to use, which operating systems to use, and which Internet applications and Web pages to use. Those individual decisions determine which of the offerings provided by thousands of ISPs, software companies, and hardware manufacturers succeed in the marketplace and thus determine how the Internet develops. Users’ demands drive innovation and competition. Governments already have a powerful influence on the market because they are large, important customers and because they define the regulatory environment in which companies operate. Because the Internet is truly global, there is a need for coordination on a range of issues, including Internet standards, the management of domain names, cybercrime, and spectrum allocation. But these different tasks are not and cannot be handled by a single organization, because so many different players are involved. Another difference is that unlike the telephony model, where a large number of telephony-related topics (such as telephone technical standards, the assignment of telephone country codes, and the allocation of cellular frequencies) are handled by the ITU, an intergovernmental organization, most international Internet issues are dealt with by nongovernmental bodies, which in some cases are competing with each other.

In many ways, the debate over ICANN and the role of governments in the allocation of domain names can be seen as a debate between these two different models of governance: the top-down telephony model and the bottom-up Internet model. In the old telephony model, the ITU, and particularly the government members of the ITU, determined the country codes for international phone calls, set the accounting rates that fixed the cost of international phone calls, and oversaw the setting of standards for telephone equipment. National governments set telecommunications policies, which had a huge impact on the local market for telephone services and on who could provide international phone service.

Today, Internet governance covers a wider range of issues, and for most of these issues the private sector, not governments, has the lead role. In contrast to telephony standards, which are set by the ITU, Internet standards are set by the Internet Engineering Task Force, the World Wide Web Consortium, and dozens of other private-sector–led organizations, as well as more informal consortia of information technology companies. However, some members of the ITU, as part of its Next Generation Networks initiative, are suggesting that the ITU needs to develop new standards to replace those developed at the IETF and elsewhere.

Likewise, the ITU is not content to have the price of international Internet connections determined by the market. For more than seven years, an ITU working group has been exploring ways in which the old accounting rates model for telephony might be adapted and applied to the Internet. Ironically, the ITU pricing mechanism has already had an effect on the Internet. Exorbitant international phone rates, which can be more than a dollar per minute in some countries, have given a big boost to the use of voice over Internet protocol (VOIP) services, which allow computer users to make phone calls without paying per-minute fees. During the time that the ITU has been discussing ways to regulate the cost of international Internet connection, in most markets the cost of international broadband links has plummeted by 90 to 95%. This apparently was not good enough for many WSIS participants, who insisted that regulation was needed to bring down user costs.

ALL WHO CARE ABOUT THE INTERNET NEED TO WORK TOGETHER TO FIND WAYS TO STRENGTHEN THE BOTTOM-UP MODEL THAT HAS SERVED THE INTERNET AND THE INTERNET COMMUNITY SO WELL.

WSIS participants also offered a number of proposals to have governments and the ITU take a larger role in regulating the applications that run over the Internet. For instance, several governments called for regulatory action to fight spam and digital piracy, protect online privacy, enhance consumer protection, and improve cybersecurity. Of course, Internet users and managers are addressing all these issues in a variety of ways, and a robust market exists for security tools and services. As a result, users have many options from which to select what works best for them. In contrast, some governments are talking about the need for comprehensive, one-size-fits-all solutions to spam, digital rights management, or cybercrime. Imposing this kind of rigid top-down solution on the Internet would have the undesirable side effect of “freezing in” current technological fixes and hindering the development of more powerful new tools and applications. Even more disturbing, in many cases the cure would be worse than the disease, because solutions proposed to limit spam or fraudulent content could also be used by governments to censor citizens’ access to politically sensitive information.

The debate over Internet governance is really about the future direction of the Internet. One outcome of the Tunis Summit was the creation of the Internet Governance Forum (IGF), a multistakeholder discussion group that will examine how decisions about the future of the Internet are made. Those advocating a greater role for governments in managing the Internet will continue to press their case at the IGF. The debate over ICANN will provide the first indication of where the discussion is heading. If a large majority of governments decide that ICANN should be replaced by an intergovernmental body or that government should have more say in ICANN decisionmaking, we can expect to hear more calls for greater government regulation in a wide range of areas, from Internet pricing to content control to Internet standards.

Fortunately, the Tunis Summit also exposed many government leaders to a broader understanding of how the Internet is governed and how it can contribute to the well-being of people throughout the world. They learned that the ICANN squabble is a relatively minor concern among the challenges that confront the Internet. The farmer in central Africa, the teacher in the Andes, or the small merchant in Central Asia does not care about where ICANN is incorporated or how it is structured. But they care about the cost of access and whether they can get technical advice on how to connect to and use the Internet. They care about whether the Internet is secure and reliable. They care about whether there are useful Internet content and services in their native language. And in many countries, they care about whether they’ll be thrown in jail for something they write in a chat room.

As the national governments, companies, nongovernmental organizations, and others involved in WSIS work to achieve the goals agreed to in Tunis, they should use the organizations that are already shaping the way the Internet is run. The existing Internet governance structure has repeatedly demonstrated its capacity to solve problems as they arise. Rather than discarding what has proven successful, world leaders should be trying to understand how it has succeeded, explaining this process to stakeholders and the public so that they can participate more effectively, and using the lessons of the past in approaching new problems. For instance, the IETF has set many of the fundamental standards of the Internet, and it is in the best position to build on those standards to continue improving Internet performance. As more people want to participate in standard setting, the IETF needs to explain to the new arrivals how it operates. To help in this effort, the Internet Society has started a newsletter to help make the IETF process more accessible and to invite input from an even larger community. The IETF is open to all. It is not even necessary to attend the three meetings that the IETF holds each year, because much of the work is done online.

Other Internet-related groups are also eager to find ways to ensure that their work and its implications are understood and supported by the broadest possible community. They should follow the IETF example by making standards and publications available for free online and by publishing explanations of what they do in lay language. They could convene online forums where critical issues are discussed and where individuals and government representatives could express their views. As part of the preparation for the June 2006 World Urban Forum in Vancouver, the UN staged HabitatJam, a three-day online forum that attracted 39,000 participants. It could certainly do the same for Internet issues.

Ten or 15 years ago, when the Internet was still mostly the domain of researchers and academics, it was possible to bring together in a single meeting most of the key decisionmakers working on Internet standards and technology as well as the people who cared about their implications. That is no longer possible, except by using the Internet itself. The Internet Society is already starting to reach out to other organizations to explore how such public events could be organized.

Before trying to reinvent Internet governance, those who are unhappy with some Internet practices or who see untapped potential for Internet expansion should begin by using the mechanisms that have proved effective for the past two decades. The Internet continues to grow at an amazing pace, new applications are being developed daily, and new business models are being tried. The current system encourages experimentation and innovation. The Internet has grown and prospered as a bottom-up system. A top-down governance system would alter its very essence. Instead, all who care about the Internet need to work together to find ways to strengthen the bottom-up model that has served the Internet and the Internet community so well.

Two years ago, at a meeting of the UN Information and Communication Technologies Task Force in New York, Vint Cerf, the chairman of ICANN, said, “If it ain’t broke, don’t fix it.” Some people have misinterpreted his words to mean that nothing is wrong and nothing needs to be fixed. No one believes that. We have many issues to address. We need to reduce the cost of Internet access and connect the unconnected; we need to improve the security of cyberspace and fight spam; we need to make it easier to support non-Latin alphabets; we need to promote the adoption of new standards that will enable new, innovative uses of the Internet; and we need better ways to fight and stop cybercriminals.

The good news is that we have many different institutions collaborating (and sometimes competing) to find ways to address these problems. Many of those institutions—from the IETF to ICANN to the ITU—are adapting and reaching out to constituencies that were not part of the process in the past. They are becoming more open and transparent. That is helpful and healthy, but we need to continue to strive to make it better. In particular, it would be very useful if funding could be found so that the most talented engineers from the developing world could take more of a role in the Internet rulemaking bodies, so that the concerns of Internet users in those countries could be factored into the technical decisions being made there.

The debate about the future of the Internet should not begin with who gets the impressive titles and who travels to the big meetings. It should begin with the individual Internet user and the individual who has not yet been able to connect. It should focus on the issues that will affect their lives and the way they use the Internet. Most of them do not want a seat on the standards committees. They want to have choice in how they connect to the Internet and the power to use this powerful enabling technology in the ways that best suit their needs and conditions.

Import Ethanol, Not Oil

To paraphrase Mark Twain, people talk a lot about reducing U.S. dependence on imported oil, but they don’t do much about it. Rather than continuing to talk the talk, the United States has a unique window of opportunity to walk the walk. Gasoline prices above $2 per gallon and our Middle East wars have made the public and Congress acutely aware of the politics of oil and its effects on our national security. With every additional gallon of gasoline and barrel of oil that the nation imports, the situation becomes worse.

Our analysis shows that the United States can have a gasoline substitute at an attractive price with little infrastructure investment and no change to our current fleet of cars and light trucks. By 2016, the United States could produce and import roughly 30 billion gallons of ethanol from corn, sugar cane, and grasses and trees, lowering gasoline use dramatically. Furthermore, the United States could encourage the European Union, Japan, and other rich nations to raise their ethanol production at home and in developing nations by a similar amount. Such increased production, together with improvements in vehicle fuel economy, would result in a notable decrease in petroleum demand, with positive implications for oil prices and Middle Eastern policy. This move would have the added benefit of supporting sustainable Third World development and reducing problems of global warming, because burning ethanol can result in no net carbon dioxide emissions into the atmosphere.

Committing to ethanol

The growing U.S. appetite for petroleum, together with demand growth in China, India, and the rest of the world, has pushed prices to new highs. The United States uses over 20 million barrels of petroleum per day, of which 58% is imported. Prices rose to almost $70 per barrel (bbl) in August 2005. The petroleum futures market is betting that the price will be $67 per bbl in December 2006 and remain well above $60 per bbl through 2012, presumably rising after that. Feeding our oil habit results in oil spills, air and water pollution, large quantities of emissions of greenhouse gases, and increased reliance on politically unstable regions of the world.

Although no one can predict the future with confidence, increasing worldwide petroleum demand will push prices higher over the next few decades. There is little public appetite for high gasoline taxes to decrease consumption or for forcing greater fuel economy on the U.S. light-duty fleet, but there is general recognition that we cannot continue to stick our heads in the sand.

Sensible policy requires that the United States both reduce the amount of energy used per vehicle-mile and substitute some other fuel or fuels for gasoline. The Bush administration plans to accomplish the latter, eventually, with hydrogen-powered vehicles. We are skeptical. The plans envisioned by even optimistic hydrogen proponents would, for decades to come, leave the nation paying ever-higher petroleum prices, continuing to damage the environment, and constraining foreign and defense policies to protect petroleum imports. Putting all our eggs in the hydrogen basket would require large investments and commit us to greater imports, higher prices, and greater dependence on the Persian Gulf until (and if) an attractive technology was developed and widely deployed.

A better alternative is for the nation to increase its use of ethanol as a fuel. In his 2006 State of the Union address, President Bush gave some support to ethanol, although he continued to place heavy emphasis on the promise of hydrogen. The president declared that the government would fund additional research in cutting-edge methods of producing ethanol from corn and cellulosic materials and vowed that his goal was to make ethanol “practical and competitive within six years.”

Unfortunately, Congress traditionally has viewed ethanol as a subsidy to corn growers rather than as a serious way to lower oil dependence. The Energy Policy Act of 2005 requires an increasing volume of renewable transportation fuel to be used each year, starting in 2006 and ultimately rising to 7.5 billion gallons of ethanol in 2012. Although this increase would raise the incomes of the corn producers and millers, it would not even keep up with the increases in the nation’s gasoline demand and so would not reduce crude oil imports. Gasoline use grows at a little more than 1% per year, about 1.4 billion gallons per year. By 2012, the United States would need to be using 13 billion gallons of ethanol merely to keep gasoline use constant. To reduce oil imports, the nation must achieve major increases in fuel economy and ethanol use.

SENSIBLE POLICY REQUIRES THAT THE UNITED STATES BOTH REDUCE THE AMOUNT OF ENERGY USED PER VEHICLE-MILE AND SUBSTITUTE SOME OTHER FUEL OR FUELS FOR GASOLINE.

The path to this goal starts today: The nation should move, as rapidly as ethanol supplies become available, to the widespread use of E20, a blend of 20% ethanol and 80% gasoline. Every car built in the past three decades can use E10, and likely E20, without modification. Making the entire 2004 fuel supply E20 would have required roughly 30 billion gallons of ethanol; unfortunately, current ethanol production and imports amount to only 13% of that.
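Taken together with the growth figure cited above, these numbers make the 13-billion-gallon threshold easy to reconstruct. The sketch below is a back-of-the-envelope reading, not the calculation behind the article's estimate:

```python
# Back-of-the-envelope reconstruction of the ~13-billion-gallon figure, using
# only quantities cited in the text: today's ethanol supply (about 13% of the
# 30 billion gallons an all-E20 fuel pool would need) plus six years of
# gasoline demand growth at roughly 1.4 billion gallons per year.
current_ethanol = 0.13 * 30          # ~3.9 billion gallons per year today
gasoline_growth_per_year = 1.4       # billion gallons of added demand per year
years = 2012 - 2006

needed_by_2012 = current_ethanol + gasoline_growth_per_year * years
print(f"Ethanol needed by 2012 to hold gasoline use flat: ~{needed_by_2012:.1f} billion gallons")
```

The result lands close to the 13 billion gallons cited above; accounting for ethanol's lower energy content per gallon would push the requirement somewhat higher.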

If the ethanol were available, the nation could substitute perhaps 80 billion gallons of ethanol for gasoline by 2016 by expanding, well beyond the current 4 million, the fleet of “flexible-fueled” vehicles that can run on any mixture containing from 0 to 85% ethanol. If all new vehicles were flexible-fueled (at a cost of less than $200 per vehicle), the market for ethanol would grow by 8 billion gallons per year.

The primary barrier to producing and importing 30 to 80 billion gallons of ethanol in 2016 is the reluctance of the public and Congress to commit to an ethanol future. Thirty billion gallons of ethanol is more than the nation’s corn growers can provide. Cellulosic ethanol is an appealing approach to the problem, one that we have previously written about. Even with all of its potential, development has been painfully slow. The construction of the first commercially operating U.S. plant is 3 to 5 years away. Learning from that plant, designing a second generation and learning from that, and then building a commercial fleet of plants with U.S. technology will take a decade.

Looking south

But there is a promising shortcut that permits immediate access to substantial amounts of ethanol. The United States could address its oil use now, while the cellulosic ethanol industry develops.

We recently traveled to Brazil and saw a developing industry producing ethanol as a motor vehicle fuel. The Brazilians flock to this fuel because it is cheaper than gasoline. Current law requires that the gasoline sold be E25: 25% ethanol blended with 75% gasoline. Brazilians are lining up to buy newly developed flexible-fueled vehicles that can burn fuels ranging from E20 to E100 (actually, the hydrated ethanol contains 95% ethanol and 5% water). With such a flexible vehicle, a driver can buy whatever fuel is cheapest.

Brazil, together with some Caribbean nations, is exporting some 200 million gallons of ethanol to the United States annually. But the United States doesn’t make it easy. Brazil pays a 2.5% duty and doesn’t receive the 51 cent per gallon excise tax rebate that U.S. producers receive. The Caribbean nations are subject to a quota. Removing these trade barriers would make imported ethanol more attractive. Such a policy would not penalize U.S. farmers or producers, because total ethanol needs can accommodate all domestic production and imports. Still, the Bush administration remains opposed to eliminating or reducing the duty.

Even so, Brazil is expanding its domestic and export markets for ethanol. Currently Brazil has 370 sugar mills and distilleries, which are forecasted to produce over 4 billion gallons of ethanol this year. An additional 40 mills and distilleries are under construction, with the goal of essentially ending gasoline imports and exporting perhaps 15 billion gallons per year in a decade. According to some estimates, efficient Brazilian producers now make ethanol at a cost of roughly 72 cents per gallon. Our examination of the sugar cane harvesting and mills convinced us that Brazil could lower production costs substantially below that level.

In addition, the Brazilians are thinking seriously about even greater ethanol production from sugar cane and agricultural wastes. One university study is examining how Brazil could replace 10% of the world’s gasoline with ethanol (25 to 30 billion gallons) without clearing more rainforests and by doing less harm to the environment than current agriculture. Brazil is also making notable progress in producing ethanol from bagasse, the fibrous residual left after all the sugar is extracted from sugar cane. At least one pilot plant is making bagasse-derived ethanol, and there are plans for a full-scale plant.

The time is right for the United States to adopt policies aimed at expanding ethanol production and use. U.S. corn growers claim that they could possibly produce 15 billion gallons in a decade. Brazil seems ready and able to export another 15 billion gallons at $1 per gallon. At the same time, we should pursue technologies to produce ethanol from biomass at ever-lower costs. Some proponents claim that cellulosic ethanol could ultimately replace all gasoline use in the United States.

The technology for making ethanol from cellulose (grasses and trees), now being developed in Brazil, the United States, and Canada, will enable many nations to grow energy crops to produce ethanol. This could be a significant cash crop for developing nations. Growing energy crops around the world has the potential to displace perhaps half of the world’s gasoline demand. The result of cellulosic ethanol development would be good for U.S. agriculture, by expanding available cash crops; for agricultural soils, by reducing fertilizer and pesticide use and increasing soil fertility; and for the ecology more generally, by providing habitat. The same would be true for farms in many nations, both rich and developing.

The key point is that U.S. actions to expand both domestic corn production and the importation of ethanol from Brazil would serve to develop the necessary infrastructure and incentives to bring cellulosic ethanol to reality more rapidly. Thus, we see no downside risk to eliminating ethanol tariffs and promoting imports as the United States expands its own ethanol production. This strategy would complement policies to increase vehicle fuel economy. We see no losers—with the exception of OPEC—from this policy, and tremendous gain for the United States.

Environmental Safeguards for Open-Ocean Aquaculture

Because of continued human pressure on ocean fisheries and ecosystems, aquaculture has become one of the most promising avenues for increasing marine fish production. During the past decade, worldwide aquaculture production of salmon, shrimp, tuna, cod, and other marine species has grown by 10% annually; its value, by 7% annually. These rates will likely persist and even rise in the coming decades because of advances in aquaculture technology and an increasing demand for fish and shellfish. Although aquaculture has the potential to relieve pressure on ocean fisheries, it can also threaten marine ecosystems and wild fish populations through the introduction of exotic species and pathogens, effluent discharge, the use of wild fish to feed farmed fish, and habitat destruction. If the aquaculture industry does not shift to a sustainable path soon, the environmental damage produced by intensive crop and livestock production on land could be repeated in fish farming at sea.

In the United States, aquaculture growth for marine fish and shellfish has been below the world average, rising annually by 4% in volume and 1% in value. The main species farmed in the marine environment are Atlantic salmon, shrimp, oysters, and hard clams; together they account for about one-quarter of total U.S. aquaculture production. Freshwater species, such as catfish, account for the majority of U.S. aquaculture output.

The technology is in place for marine aquaculture development in the United States, but growth remains curtailed by the lack of unpolluted sites for shellfish production, competing uses of coastal waters, environmental concerns, and low market prices for some major commodities such as Atlantic salmon. Meanwhile, the demand for marine fish and shellfish continues to rise more rapidly than domestic production, adding to an increasing U.S. seafood deficit (now about $8 billion annually).

The U.S. Department of Commerce has articulated the need to reverse the seafood deficit, and under the leadership of its subagency, the National Oceanic and Atmospheric Administration (NOAA), has a stated goal of increasing the value of the U.S. aquaculture industry from about $1 billion per year currently to $5 billion by 2025. In order to achieve this goal, the Department of Commerce has set its sights on the federal waters of the Exclusive Economic Zone (EEZ), located between the 3-mile state zone and 200 miles offshore, where the potential for aquaculture development appears almost limitless. The United States has the largest EEZ in the world, amounting to 4.5 million square miles, or roughly 1.5 times the landmass of the lower 48 states. Opening federal waters to aquaculture development could result in substantial commercial benefits, but it also poses significant ecological risks to the ocean—a place many U.S. citizens consider to be the nation’s last frontier.

On June 8, 2005, Commerce Committee Co-Chairmen Sens. Ted Stevens (R-AK) and Daniel Inouye (D-HI) introduced the National Offshore Aquaculture Act of 2005 (S. 1195). The bill, crafted by NOAA, seeks to support offshore aquaculture development within the federal waters of the EEZ; to establish a permitting process that encourages private investment in aquaculture operations, demonstrations, and research; and to promote R&D in marine aquaculture science and technology and related social, economic, legal, and environmental management disciplines. It provides the secretary of Commerce with the authority and broad discretion to open federal waters to aquaculture development, in consultation with other relevant federal agencies but without firm environmental mandates apart from existing laws. The bill’s proponents argue that fish farming in the open ocean will relieve environmental stress near shore and protect wild fisheries by offering an alternative means of meeting the rising demand for seafood. However, because it lacks a clear legal standard for environmental and resource protection, the bill’s enactment would likely lead to a further decline in marine fisheries and ecosystems.

The introduction of S. 1195 came as no surprise to the community of environmental scientists and policy analysts who have followed the development of aquaculture in the United States. In 1980, Congress passed the National Aquaculture Act to promote aquaculture growth, and in the process established the Joint Subcommittee on Aquaculture, an interagency body whose task was to provide coordination and seek ways to reduce regulatory constraints on aquaculture development. Despite these actions, local concerns and associated regulatory burdens have limited the expansion of marine aquaculture within the 3-mile jurisdiction of many states, and regulatory uncertainty has discouraged investment in offshore production between the 3-mile state zone and the 200-mile EEZ. The Bush administration is now prepared to support efforts to streamline regulatory authority within the federal waters of the EEZ, promote open-ocean aquaculture, and make the United States a more competitive producer of marine-farmed fish.

Implementing S. 1195 would involve a two-tiered process: first, the creation of a law authorizing the leasing and permitting of open-ocean aquaculture facilities by the secretary of Commerce; and second, the start of rulemaking procedures within and among federal agencies. If passed, the bill would allow NOAA to issue site and operating permits within federal waters with 10-year leases, renewable for 5-year periods. Decisions on permit applications would be issued within 120 days and would not require a lengthy inventory process to assess the state of marine resources at each site. The proposed legislation requires NOAA to “consider” environmental, resource, and other impacts of proposed offshore facilities before issuing permits; however, there is no requirement that NOAA actually identify and address those impacts before the permits and leases are granted. Similarly, the bill does not require that, during the permitting process, NOAA weigh the risks to the marine environment against the commercial benefits of aquaculture development.

The pro–fish-farming language of S. 1195, without commensurate language on the conservation of ocean resources and ecosystems, is extremely worrisome. It is unlikely that ocean resources will be protected in the face of aquaculture development unless the statute requires specific language on environmental mandates—not just “considerations”—for the rulemaking and permitting processes.

Open-ocean aquaculture encompasses a variety of species and infrastructure designs; in the United States, submersible cages are the model used for offshore finfish production. These cages are anchored to the ocean floor but can be moved within the water column; they are tethered to buoys that contain an equipment room and feeding mechanism; and they can be large enough to hold hundreds of thousands of fish in a single cage. Robotics are often used for cage maintenance, inspection, cleaning, and monitoring. Submersible cages have the advantage of avoiding rough water at the surface and reducing interference with navigation. A major disadvantage of offshore operations is that they tend to be expensive to install and operate. They require sturdier infrastructure than near-shore systems, they are more difficult to access, and the labor costs are typically higher than for coastal systems.

The economic requirements of open-ocean aquaculture suggest that firms are likely to target lucrative species for large-scale development or niche markets. In the United States, moi is produced commercially far from shore in Hawaii state waters, and experiments are being conducted with halibut, haddock, cod, flounder, amberjack, red drum, snapper, pompano, and cobia in other parts of the country. Tuna is another likely candidate for offshore development. Altogether, about 500 tons of fish are currently produced each year in submersible cages in the United States, primarily within a few miles of shore. The technology appears to have real promise, even though it is not yet economically viable for commercial use in most locations, and it is not yet deployed widely in federal waters far from shore.

Opening far-offshore waters to aquaculture could lead to substantial commercial benefits, but it also poses significant ecological risks to the ocean—a place many U.S. citizens consider to be our last frontier.

Some of the species now farmed in open-ocean cages, such as bluefin tuna, Atlantic cod, and Atlantic halibut, are becoming increasingly depleted in the wild. Proponents of offshore aquaculture often claim that the expansion of farming into federal waters far from shore will help protect or even revive wild populations. However, there are serious ecological risks associated with farming fish in marine waters that could make this claim untenable. The ecological effects of marine aquaculture have been well documented, particularly for near-shore systems, and are summarized in the 2005 volumes of the Annual Review of Environment and Resources, Frontiers in Ecology (February), and BioScience (May). They include the escape of farmed fish from ocean cages, which can have detrimental effects on wild fish populations through competition and interbreeding; the spread of parasites and diseases between wild and farmed fish; nutrient and chemical effluent discharge from farms, which pollutes the marine environment; and the use of wild pelagic fish for feeds, which can diminish or deplete the low end of the marine food web in certain locations.

Because offshore aquaculture is still largely in the experimental phase, its ecological effects have not been widely documented, yet the potential risks are clear. The most obvious ecological risk of offshore aquaculture results from its use of wild fish in feeds, because most of the species being raised in open-ocean systems are carnivorous. If offshore aquaculture continues to focus on the production of species that require substantial quantities of wild fish for feed—a likely scenario because many carnivorous fish command high market prices—the effects on food webs in ecosystems far removed from the farms themselves could be significant.

In addition, although producers have an incentive to use escape-proof cages, escapes are nonetheless likely to occur as the offshore industry develops commercially. The risks of large-scale escapes are high if cages are located in areas, such as the Gulf of Mexico, that are prone to severe storms capable of destroying oil rigs and other sizeable marine structures. Even without storms, escapes frequently occur. In the Bahamas and Hawaii, sharks have torn open offshore cages, letting many fish escape. Moreover, farming certain species can lead to large-scale “escapes” in the form of fertilized eggs. For example, cod produce fertilized eggs in ocean enclosures, and although ocean cages are more secure than near-shore net pens, neither pens nor cages will contain fish eggs. The effects of such events on native species could be large, regardless of whether the farmed fish are within or outside of their native range. At least two of the candidate species in the Gulf of Mexico (red drum and red snapper), as well as cod in the North Atlantic, have distinct subpopulations. Escapes of these farmed fish could therefore lead to genetic dilution of wild populations, as wild and farmed fish interbreed.

The main problem with the proposed legislation is the broad discretion given to the secretary of Commerce to promote offshore aquaculture without clear legal standards for environmental protection.

Offshore aquaculture also poses a risk of pathogen and parasite transmission, although there is currently little evidence for disease problems in offshore cages. In general, however, large-scale intensive aquaculture provides opportunities for the emergence of an expanding array of diseases. It removes fish from their natural environment, exposes them to pathogens that they may not naturally encounter, imposes stresses that compromise their ability to resist infection, and provides ideal conditions for the rapid transmission of infectious agents. In addition, the production of high-value fish often involves trade in live aquatic animals for bait, brood stock, milt, and other breeding and production purposes, which inevitably results in trans-boundary spread of disease. The implications of open-ocean farming for pathogen transmission between farmed and wild organisms thus remain a large and unanswered question. Moreover, pathogen transmission in the oceans is likely to shift in unpredictable ways in response to other human influences, particularly climate change.

Even the claim that open-ocean aquaculture provides “a dilution solution” to effluent discharge may be disputed as the scale of aquaculture operations expands to meet economic profitability criteria. The ability of offshore aquaculture to reduce nutrient pollution and benthic effects will depend on flushing rates and patterns, the depth of cage submersion, the scale and intensity of the farming operations, and the feed efficiency for species under cultivation. Scientific results from an experimental offshore system in New Hampshire indicate no sedimentation or other benthic effects, even when the cages are stocked with more than 30,000 fish. However, to be economically viable, commercial farms will likely stock 10 or more times that number of fish; commercial salmon farms commonly stock 500,000 to a million fish at a site. It is not a stretch to imagine a pattern similar to that of the U.S. industrial livestock sector, with large animal operations concentrated near processing facilities and transportation infrastructure, and in states with more lenient environmental standards.
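
To make the scale-up concrete, the following minimal sketch applies simple proportional scaling: it assumes, purely for illustration, that waste load grows linearly with the number of fish at a site, setting aside the flushing, depth, and feed-efficiency factors just noted. The stock figures are the ones quoted above.

```python
# Illustrative scaling of waste load with stock size, assuming (hypothetically)
# that discharge grows linearly with the number of fish and that feed
# efficiency is unchanged at commercial scale.

experimental_stock = 30_000                   # New Hampshire experimental site, per the text
commercial_stocks = (500_000, 1_000_000)      # typical commercial salmon-farm range, per the text

for stock in commercial_stocks:
    scale = stock / experimental_stock
    print(f"{stock:>9,} fish -> roughly {scale:.0f}x the waste load of the experimental site")
# ~17x to ~33x: a site showing no benthic effects at experimental scale could
# behave very differently at commercial stocking levels.
```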

An essential question in the debate thus remains: What is the vision of the Department of Commerce in developing offshore aquaculture? If the vision is to expand offshore production to a scale sufficient to eliminate the $8 billion seafood deficit, the ecological risks will be extremely high.

In 2003 and 2004, the U.S. Commission on Ocean Policy and the Pew Oceans Commission completed their reports on the state of the oceans and suggested various policy reforms. Both reports acknowledged the rising role of aquaculture in world markets, described its effects on ocean ecosystems, and recommended NOAA as the lead federal agency to oversee marine aquaculture in the United States. The main difference between the reports is captured in the recommendations. Whereas the U.S. Commission recommended that the United States pursue offshore aquaculture, acknowledging the need for environmentally sustainable development, the Pew Commission recommended a moratorium on the establishment of new marine farms until comprehensive national environmental standards and policy are established. The drafting of S. 1195 clearly follows the U.S. Commission approach but uses even weaker environmental language, which allows for multiple interpretations and no clear mandate on marine resource and ecosystem protection.

The main problem with the proposed legislation is the broad discretion given to the secretary of Commerce to promote offshore aquaculture without clear legal standards for environmental protection. The authority is intended to facilitate a streamlining of regulations, yet it provides minimal checks and balances within the system. The bill states that the secretary “shall consult as appropriate with other federal agencies, the coastal states, and regional fishery councils . . . to identify the environmental requirements applicable to offshore aquaculture under existing laws and regulations.” An implicit assumption of the bill is that most of the needed environmental safeguards are already in place. Additional environmental regulations targeted specifically for offshore aquaculture are to be established in the future “as deemed necessary or prudent by the secretary” in consultation with other groups. Yet timing is everything. If the law is passed without the establishment of comprehensive national guidelines for the protection of marine species and the environment—and the requirement that these guidelines be implemented—such protection may never happen, or it may happen after irreversible damages have occurred.

Are current federal laws sufficient to protect the environment in the EEZ? The answer is no. As a framework, they leave major gaps in environmental protection. The Rivers and Harbors Act gives the Army Corps of Engineers the authority to issue permits for any obstruction in federal waters (including fish cages) but does not provide clear environmental mandates. The Corps has the broad discretion to ensure environmental quality but is not required to do so. The Outer Continental Shelf Lands Act extends this authority farther offshore beyond the territorial waters of the EEZ and applies to any offshore facilities that are anchored on or up to 1 mile from offshore oil rigs; in this case, further permit approval is required from the Department of the Interior. The Clean Water Act gives the Environmental Protection Agency (EPA) the authority to regulate waste discharges from aquaculture facilities, but the agency’s recent effluent guidelines for aquaculture net pens, which presumably would be applied to offshore cages, focus simply on the use of best management practices. Aquaculture discharge is not currently regulated through the National Pollutant Discharge Elimination System (NPDES), the permitting system used for municipal and industrial point-source discharge to U.S. waters. The Endangered Species Act and the Marine Mammal Protection Act both apply in the EEZ and can be used to limit offshore aquaculture operations if they are proven to threaten any listed threatened or endangered species, or if they unlawfully kill marine mammals. In addition, the Lacey Act gives the U.S. Fish and Wildlife Service the authority to regulate the introduction of exotic species in federal waters if they have been listed specifically as “injurious” to other species. The Lacey Act applies to any species that are transported or traded across borders, but not to species that already exist within borders. Finally, all international treaties and protocols would apply to offshore aquaculture in the EEZ.

The only federal law that the proposed bill would explicitly supersede is the Magnuson-Stevens Act (MSA) of 1976, which stipulates a balance between fishing and conservation. S. 1195 does not include any specific balancing requirements between ecosystems and industry. Regional fishery management councils established under the MSA as well as the public would be consulted in the process of environmental rulemaking but would not have a determining effect on the outcome.

Although S. 1195 supersedes only one federal law, existing legislation does not adequately address the major risks of farmed fish escapes and genetic dilution of wild stocks, pathogen transmission from farms to wild organisms, and cumulative effluent discharge. Most existing laws and regulations for marine aquaculture are found at the state level, where current near-shore systems operate. Few states have comprehensive regulatory plans for marine aquaculture, and there are no regional plans that address the risks of biological, chemical, or nutrient pollution that spreads from one coastal state to the next.

The proposed bill gives coastal states an important role in influencing the future development of offshore aquaculture. Indeed, coastal states would be permitted to opt out of offshore aquaculture activities. The bill states that offshore aquaculture permits will not be granted or will be terminated within 30 days if the secretary of Commerce receives written notice from the governor of a coastal state that the state does not wish to have the provisions of the act apply to its seaward portion of the EEZ. The governor can revoke the opt-out provision at any time, thus reinstating NOAA’s authority to issue permits and oversee aquaculture operations in that portion of the EEZ. Although the bill does not grant coastal states any jurisdiction over that part of the EEZ, it does provide them with potential exclusion from offshore aquaculture activities.

This opt-out provision ensures a role for coastal states that is stronger than that which would apply through the Consistency Provision (section 307) of the Coastal Zone Management Act (CZMA). Section 307 of the CZMA requires that federally permitted projects be consistent with select state laws that safeguard coastal ecosystems, fisheries, and people dependent on those fisheries (collectively called the state’s “coastal zone management program”). To complete the permitting process for an offshore aquaculture project, the project applicant must certify the project’s consistency with the state’s coastal zone management program to NOAA. Even if the state objects to the applicant’s consistency certification, the secretary of Commerce can override the state’s objection and issue the permit simply by determining that the project is consistent with the objectives or purposes of the CZMA or that the project is necessary in the interest of national security. Thus, the Department of Commerce retains ultimate authority over whether state laws apply to the EEZ.

Although the decision by different coastal states to opt out of the proposed offshore aquaculture bill is yet to be determined, some states have already adopted policies related to aquaculture development within state waters. In Alaska, state law prohibits finfish farming within the 3-mile state zone. In Washington, House Bill 1499 allows the Washington Department of Fish and Wildlife to have more control over environmental damages caused by near-shore salmon farming. In California, salmon farming and the use of genetically modified fish are prohibited by law in marine waters, and a new bill currently under review in the state legislature (SB 210) requires strict environmental standards for all other forms of marine aquaculture introduced into state waters. The California legislation, in particular, provides an excellent model for a redrafting of the National Offshore Aquaculture Act.

The need for national environmental standards

Whether environmentalists like it or not, marine aquaculture is here to stay and will inevitably expand into new environments as global population and incomes grow. Although the United States is in a position to make itself a global model for sustainable fish production in the open ocean, the proposed bill unfortunately falls far short of this vision. Pursuant to the recommendations of the Pew Commission, an aggressive marine aquaculture policy is needed at the national level to protect ocean resources and ecosystems. Within this policy framework, several specific features are needed:

  • The establishment of national environmental standards for siting and operation that minimize adverse effects on marine resources and ecosystems and that set clear limits on allowable ecological damage.
  • The establishment of national effluent guidelines through the EPA for biological, nutrient, and chemical pollution from coastal and offshore fish farms, using NPDES permits to minimize cumulative effluent impacts.
  • The establishment of substantive liability criteria for firms violating environmental standards, including liability for escaped fish and poorly controlled pathogen outbreaks.
  • The establishment of rules for identifying escaped farm fish by their source and prohibiting the use of genetically modified fish in ocean cages.
  • The establishment of a transparent process that provides meaningful public participation in decisions on leasing and permitting of offshore aquaculture facilities and by which marine aquaculture operations can be monitored and potentially closed if violations occur.
  • The establishment of a royalty payment process for offshore aquaculture leases that would compensate society for the use of public federal waters.

At the same time, firms exceeding the minimum standards should be rewarded, for example, through tax breaks or reductions in royalty fees, in order to encourage environmental entrepreneurship and international leadership. If a comprehensive set of environmental standards and incentives were articulated within the law itself, the bill would gain acceptance from a broad constituency interested in the sustainable use of ocean resources.

Proponents of offshore aquaculture might argue that these recommendations hold the industry to exceedingly high standards. Yes, the standards are high, but also essential. There is now a widespread realization that the ability of the oceans to supply fish, assimilate pollution, and maintain ecosystem integrity is constrained by the proliferation of human activities on land and at sea. Offshore aquaculture could help to alleviate these constraints, but only if it develops under clear and enforceable environmental mandates.

Delegitimizing Nuclear Weapons

The most urgent national security issue facing the United States is the possibility that a nuclear weapon might be used against this nation as an instrument of war or terror. If we are to avoid such a catastrophe and its unprecedented environmental, economic, and social effects, this threat must be addressed vigorously and soon.

Facing up to the threat will require more than tracking down terrorists or warning rogue states that they will be held accountable for their actions. It will require delegitimizing nuclear weapons as usable instruments of warfare and relegating them to a deterrent role or, in certain cases, to weapons of last resort. This policy change will be difficult to adopt, because the nation’s leaders as well as the general public have lost sight of the devastating power of nuclear weapons and tend to disregard the political and moral taboos surrounding their use.

A nuclear weapon has not been detonated in war since 1945. The 1962 Cuban missile crisis is ancient history for anyone under 50. There have been fewer than a handful of nuclear tests during the past decade, and the vast majority of nuclear tests between 1963, when the Limited Test Ban Treaty came into effect, and 1996 were conducted underground, literally and figuratively burying the “shock and awe” effects of a nuclear explosion. In the meantime, presidents and politicians have come to view nuclear weapons as a seamless extension of the nation’s military capabilities and the threat of their potential use as an acceptable part of its political rhetoric.

This nuclear amnesia is critically dangerous for several reasons. First, nuclear weapons are enormously more destructive than conventional explosives. During 10 months of air raids on Britain in 1940–1941, the German Luftwaffe dropped bombs with a total explosive force equivalent to 18.8 kilotons of TNT and killed more than 43,000 people. At Hiroshima, one bomb with an estimated yield of 15 kilotons killed 70,000 people in one day, with the toll reaching 140,000 by the end of 1945 because of subsequent deaths from injuries and radiation exposure.

A dangerous amnesia about the devastating nature of nuclear weapons has set in among the nation’s leaders and citizens.

Second, despite efforts by the Clinton and Bush administrations to equate the dangers of chemical, biological, and nuclear weapons by lumping them together as weapons of mass destruction, nuclear weapons are the only ones that could devastate the United States, irreparably altering the lives of its citizens. Chemical weapons (CWs) tend to be localized in their effects and difficult to deliver over large areas. They can be detected by sensors and their effects mitigated by protective measures. Biological weapons (BWs) are a more serious threat, but they can be tricky to produce, difficult to disseminate, and unpredictable in their effects. Against unprepared civilians, BWs could be devastating, although the severity of an attack could be attenuated by vaccinations, masks, antidotes, protective clothing, quarantines, and small-scale evacuations. On the other hand, there might be no discernible sign of the launch of a BW attack, in which case those responsible might be impossible to identify.

The devastating effects of nuclear weapons as compared with CWs and BWs are indicated in a comparative lethality risk model developed by the now-defunct congressional Office of Technology Assessment (OTA). The release of 300 kilograms of sarin nerve gas would create a 0.22-square-kilometer lethal area and cause 60 to 200 deaths. The release of 30 kilograms of anthrax spores would create a 10-square-kilometer lethal area and cause 30,000 to 100,000 deaths. But the explosion of a hydrogen bomb with a 1-megaton yield would create a 190-square-kilometer lethal area and cause 570,000 to 1,900,000 deaths.
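
A derived comparison, not part of the OTA model itself, makes the difference in scale explicit: dividing the quoted death estimates by the quoted lethal areas shows that the nuclear weapon's dominance comes above all from the sheer size of the area it renders lethal.

```python
# Figures quoted above from the OTA comparative lethality model.
# The deaths-per-square-kilometer column is a derived illustration only.

ota_estimates = {
    # agent: (lethal area in km^2, (low deaths, high deaths))
    "sarin, 300 kg":       (0.22, (60, 200)),
    "anthrax, 30 kg":      (10.0, (30_000, 100_000)),
    "1-Mt hydrogen bomb":  (190.0, (570_000, 1_900_000)),
}

for agent, (area, (lo, hi)) in ota_estimates.items():
    print(f"{agent:20s} lethal area {area:7.2f} km^2, deaths {lo:,}-{hi:,} "
          f"(~{lo/area:,.0f}-{hi/area:,.0f} per km^2)")
# The per-area lethality of the anthrax release and the bomb are broadly
# comparable; what sets the nuclear weapon apart here is a lethal area
# roughly 20 times larger.
```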

Third, the public is generally unaware of the large numbers of nuclear weapons around the world. About 27,000 are believed to exist, held by eight known nuclear weapon states and one suspected one (North Korea). Most of these weapons (26,000) are in U.S. or Russian arsenals. Weapons that are deployed and ready to be used on short notice generally are secure from theft or diversion. But security problems, particularly in Russia, continue to exist with those weapons that are kept in storage or reserve.

The 2002 U.S.-Russian Strategic Offensive Reduction Treaty (SORT, also referred to as the Moscow Treaty) will reduce the long-range strategic nuclear weapons of the two parties to 1,700 to 2,200 deployed warheads each by 2012. The treaty, however, does not apply to strategic nuclear weapons in storage or reserve, or to any tactical nuclear weapons, which together constitute the overwhelming majority of warheads in the arsenals. The treaty also does not affect the 2,500 to 3,000 warheads that the United States and Russia each still maintain ready to be launched on short notice. The other nuclear powers generally keep their systems in a lower state of readiness, often without the warheads mated to missiles or aircraft.

Fourth, unlike CWs and BWs, there are no effective defenses against a nuclear weapon delivered by long-range missile or clandestinely emplaced in a target country. Thus, although the command authorities of the nuclear weapon states have high confidence that offensive warheads launched by missile will detonate over their targets, there is no comparable confidence in the reliability of the defenses that are deployed or being developed against ballistic or cruise missile attack. Strategic missile defenses currently being developed are unproven against a determined small-scale attack, unworkable against a large-scale attack, and irrelevant to the threat from rogue states or terrorists, whose delivery systems are unknown but not likely to be long-range ballistic missiles.

Fifth, although CWs and BWs are banned by international treaty, nuclear weapons are not. For reasons perhaps related to the adversarial, deterrent relationship between the United States and the Soviet Union during the Cold War, nuclear weapons have been more leniently treated than CWs and BWs. Specifically, the possession, use, and transfer of CWs and BWs are outlawed by international agreement. Notwithstanding the fact that some countries have acquired and used these weapons, the international community has established an explicit norm against their use, and the relevant CW and BW agreements call for an international response to violations of this norm to be orchestrated through the United Nations (UN) Security Council.

Although there is no international convention forbidding the use of nuclear weapons in warfare, implicit political and moral constraints against their use seem to be recognized by most states. (These would, of course, not restrain non-state actors.) In addition, in the absence of a universal ban, some large geographic areas of the world have declared nuclear weapons off limits. These so-called nuclear weapon–free zones (NWFZs) include Latin America, Africa, the South Pacific, Southeast Asia, and Antarctica.

The major international treaty regarding nuclear weapons—the Nuclear Non-Proliferation Treaty (NPT)—bans only the proliferation, not the use, of nuclear weapons beyond the United States, United Kingdom, France, China, and Russia, which also happen to be the five permanent members (P-5) of the UN Security Council. However, the treaty grants the non-nuclear weapon states the right to the “peaceful” use of nuclear technology. This essentially permits any state to develop the capability to produce the fuel required for a nuclear power plant. Unfortunately, this fuel could also be used to build a nuclear weapon. This is the basis for the current concern about the Iranian program to enrich uranium.

To counterbalance the continued possession of nuclear weapons by the P-5 nations, the NPT calls for these states to work toward ending the arms race and for all NPT members to seek general and complete disarmament. No timetable and no political or security criteria for disarmament were established and, not surprisingly, no nuclear nation has committed to a date for its own denuclearization (although the debate has some resonance in the United Kingdom). But the nuclear disarmament goal is nonetheless an explicit one and is frequently singled out by the non-nuclear weapon states as an “unequivocal obligation” that the nuclear powers have yet to fulfill.

In addition, there is no explicit ban on the further development or modernization of nuclear weapons by the nuclear weapon states. The Comprehensive Nuclear Test Ban Treaty (CTBT), which would have essentially halted the development of more sophisticated weapons, was rejected by the U.S. Senate in 1999, and its entry into force any time soon, if ever, looks highly improbable.

A final reason why nuclear amnesia is dangerous is that, with the exception of China, the nuclear weapon states continue to maintain the right of first use of nuclear weapons against any kind of attack, as well as the right of preventive or preemptive attack. As President Jacques Chirac of France stated in January 2006, “The leaders of states who would use terrorist means against us, as well as those who would consider using in one way or another weapons of mass destruction, must understand that they would lay themselves open to a firm and adapted response on our part. This response could be a conventional one. It could also be of a different kind.”

All nuclear first-use policies are in sharp conflict with the findings of the International Court of Justice (ICJ). In 1996, the ICJ concluded that “the threat or use of nuclear weapons would generally be contrary to the rules of international law applicable in armed conflict, and in particular the principles and rules of humanitarian law.” The ICJ, however, could not agree on whether nuclear weapons could be used “in an extreme circumstance of self-defense, in which the very survival of a State would be at stake.” First-use policies also contravene the so-called negative security assurances, a solemn political commitment by the P-5 not to carry out a nuclear attack against non-nuclear weapon states that are NPT members.

Concern about the Bush administration’s current nuclear weapon use policies has evoked a strong reaction from some members of Congress. In a December 2005 letter to President Bush, 16 lawmakers objected to the March 15, 2005, draft of the Pentagon’s Doctrine for Nuclear Operations that would allow combat commanders to request presidential approval for the preemptive use of nuclear weapons under various conditions. “We believe this effort to broaden the range of scenarios in which nuclear weapons might be contemplated is unwise and provocative,” the letter said.

By supporting a variety of justifications for nuclear use, the administration is sending a clear message that nuclear weapons are indispensable, legitimate war-fighting weapons required by the world’s most powerful country to ensure its security. Moreover, the current policy also indicates that the United States, despite its rhetorical support for the NPT and its efforts to convince non-nuclear weapon states to renounce nuclear weapons and fissile material production, does not intend to eliminate these weapons from its own arsenal and, indeed, plans to modernize and retain the arsenal indefinitely.

Delegitimizing nuclear weapons

There are no indications that the Bush administration, in its three remaining years in office, will reexamine its ill-considered and self-endangering policy of threatening to use nuclear weapons in practically every contingency or abandon the push to develop new specialized nuclear weapons to support this policy. If the United States is to avoid the unmitigated disaster surrounding any nuclear weapons use, it will be up to the next administration to remove nuclear weapons from the quiver of threat responses and war-fighting scenarios and begin the process of delegitimizing nuclear weapons.

To this end, three actions can be taken. The next administration should declare that the United States does not consider nuclear weapons to be a legitimate weapon of war and will not use them unless they are used by an adversary. This statement does not require congressional approval or presage costly military acquisitions. It might also be coordinated with the other nuclear powers. As the head of the Department of Energy’s National Nuclear Security Administration, Linton Brooks, noted recently, “We can change our declaratory policy in a day.”

The current U.S. nuclear use policy is unwise in that it lacks any strategic rationale. The threat during the Cold War to use nuclear weapons in response to non-nuclear aggression, however contradictory and self-deterring such a policy might have been, was considered helpful in reassuring the Western alliance that some military response was available to counter the conventional military quantitative advantages of the Warsaw Pact. Today, however, the United States enjoys the greatest conventional superiority in history over any potential enemy or combination of enemies and, with the exception of nuclear weapons, cannot be put at risk by any adversary.

In 1993, three respected members of the U.S. national security establishment, McGeorge Bundy, William J. Crowe, and Sidney Drell, wrote: “There is no vital interest of the U.S., except the deterrence of nuclear attack, that cannot be met by prudent conventional readiness. There is no visible case where the U.S. could be forced to choose between defeat and the first use of nuclear weapons.” Nothing has occurred since this statement was written to make nuclear weapons more critical to maintaining stability and security. To the contrary, for the United States to insist that it needs the threat of the use of nuclear weapons to deter potential state and non-state adversaries raises the question of why other, much weaker nations, confronted by hostile neighbors, do not need them as well (or even more). Moreover, a U.S. first-use policy reinforces the value and prestige attributed to nuclear weapons and undermines efforts by the United States to persuade other nations to refrain from developing their own nuclear arsenals.

Current U.S. nuclear use policy is also unwise in that it lacks any political rationale. As the series of its post–Cold War interventions has demonstrated, the United States is prepared to undertake military missions for a number of reasons: to promote democracy (Haiti), resolve civil conflicts (Somalia), protect allies (Kuwait), promote regime change (Iraq), pursue terrorists (Afghanistan), and protect human rights (Kosovo). At the same time, the United States has made it clear that it seeks to perform these mainly humanitarian missions with a minimum amount of harm to innocent civilians and the target country.

In none of these interventions would nuclear weapons have been an appropriate or necessary means to a political end. It is not possible to reconcile the use of a nuclear weapon with the pursuit of democratic and humanitarian goals. But as long as the United States refuses to rule out the potential use of nuclear weapons in virtually any contingency, it is difficult to avoid creating the impression that the spread of democratic values is being backed by a nuclear threat. To many countries, this policy seems both deceitful and dangerous and suggests that the only way to meet the U.S. challenge is to possess nuclear weapons of their own.

Some proponents of the current nuclear use policy argue that the United States will probably never employ nuclear weapons except in retaliation for an actual nuclear attack or to prevent an imminent one. Certainly, memoirs by senior policymakers during the First Gulf War make it clear that whatever was implied, the United States never had, under any circumstances, the intention of using nuclear weapons during the war. Nonetheless, the proponents claim that the uncertainty or “calculated ambiguity” as to the nature of the U.S. response to a high-profile security challenge still serves to deter a potential aggressor from initiating a CW or BW attack. If indeed the United States continues to maintain that all options are on the table but does not actually intend to use nuclear weapons in the situations envisaged by the Pentagon’s draft Doctrine for Nuclear Operations cited above, then “calculated ambiguity” as a policy loses its credibility and the United States is saddled with a doctrine that provokes hostility rather than promoting security.

The next administration could also make it clear that the United States does not intend to resume nuclear testing in order to develop new nuclear weapons. There will be a new Congress in 2009 that, if the new administration is so committed, might be persuaded to reconsider the Senate’s 1999 rejection of the CTBT. If China joined with the United States and the three other members of the P-5 that have already ratified the treaty, it would make it considerably more difficult (but not impossible) for the major nuclear powers to begin nuclear testing again.

Ratifying the CTBT would not, however, immediately solve the challenges involving India, Pakistan, North Korea, or Israel, which currently do not seem to have the political incentive to sign and ratify the agreement or to break the moratorium on testing. It would nonetheless delegitimize nuclear testing, curb substantial arsenal modernization by the P-5, and reinforce U.S. credibility in its efforts to convince other nations of the need to stem proliferation. If the next administration cannot muster enough senatorial support to see the CTBT through to ratification, it should publicly recommit to the self-imposed testing moratoria—as the current administration has done after a fashion—that have been in place for all of the P-5 since 1996. (Russia has not tested a nuclear weapon since 1990; the United States and the United Kingdom have not tested one since 1992.)

The continued testing moratorium should be combined with a disavowal of efforts to develop new warheads to carry out nuclear use policies. The administration argued in its 2001 Nuclear Posture Review that “new capabilities must be developed to defeat emerging threats such as hard and deeply buried targets, to find and attack mobile and relocatable targets, to defeat chemical or biological agents, and to improve accuracy and limit collateral damage.” The administration has been seeking funds to explore three new nuclear weapons: a “mini-nuke” (purportedly to reduce collateral damage); a “bunker-buster” (an earth-penetrating bomb intended to destroy underground facilities); and a “reliable replacement warhead (RRW)” to increase the longevity, reliability, and safety of the stockpile.

A U.S. first-use policy reinforces the value and prestige attributed to nuclear weapons and totally undermines our efforts to persuade other nations to refrain from developing their own nuclear arsenals.

Although one or another of these devices might be developed without proof testing (the bunker-buster, for example, is more a question of enhancing the casing than changing the physics package, and there are existing low-yield warhead designs available for a mini-nuke), those who champion these new weapons are likely to use the uncertain performance of these new systems, most egregiously the RRW, as a compelling reason to abandon the testing moratorium and resume nuclear tests. (One way to ensure that the RRW will not need proof testing, as Raymond Jeanloz, chair of the National Academy of Sciences Committee on International Security and Arms Control, pointed out at the 2005 Arms Control Association annual meeting, is to keep it within “design parameters that have a test pedigree.” This is certainly a possible design constraint, but the pressures to adopt more advanced designs and to test any resulting weapon will be extremely strong.)

The nuclear weapons that the administration is seeking are not ideal or even necessary for carrying out these missions. In the first instance, finding hard and deeply buried targets of high value is a strenuous and uncertain intelligence task. If such sites are correctly identified (a big “if”), many of them could be destroyed or disabled or access to them denied by conventional precision-guided munitions. On the other hand, if they are misidentified and a nuclear weapon destroys a nonmilitary industrial site and the neighborhood surrounding it, the United States would be subject to international outrage of the sort that has surrounded the invasion of Iraq.

The Bush administration’s broadening of the range of scenarios in which nuclear weapons might be used is unwise and provocative.

In addition, any potential adversary would seek to put its important command and control or other military assets deeply enough underground or within mountains or inside tunnels to make them safe from such attack. In that case, the hardened targets would either be unreachable or require weapons with such high yields that they would unfailingly inflict significant collateral damage (a mission already within the capabilities of weapons in the existing arsenal). Alternatively, adversaries might embed their high-value targets in civilian neighborhoods, inviting the United States to face widespread condemnation if these targets were attacked with nuclear weapons.

The same paradox surrounds attacks against BW or CW agents. The deeper the bunker and the larger the yield required to destroy it, the greater the collateral damage. Moreover, if the attack fails to neutralize the chemical and biological agents by thermal effects or radiation, then the agents themselves may be dispersed and compound the lethality of the attack.

The administration also argues that the current U.S. nuclear arsenal is self-deterring because a rogue state leader could doubt that the United States would employ large-yield warheads against an adversary. Low-yield mini-nukes, the administration claims, would be a much more credible deterrent or response. There are serious drawbacks to this argument. One is that making nuclear weapons more usable, particularly when they are not militarily required, ultimately endangers U.S. security by breaking down the barriers to the use of any nuclear device. Second, the idea that a mini-nuke will reduce collateral damage is truly nuclear “newspeak,” given the destructive power of even a small-yield weapon. (A 12.5-kiloton weapon could cause 20,000 to 80,000 deaths. The severe blast damage radius of a 5-kiloton weapon would be more than 0.6 kilometer.) Finally, the call for usable mini-nukes implies that the current force of 6,000 deployed nuclear warheads, including some weapons with very small yields, is neither a valid deterrent nor a credible retaliatory threat.
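
To put the “reduced collateral damage” claim in perspective, the sketch below scales the severe-blast-damage radius quoted above (0.6 kilometer for 5 kilotons) to other yields using the standard cube-root scaling of blast radius with yield; the other yields shown are illustrative.

```python
# Severe-blast-damage radius scaled from the 5-kiloton figure quoted above,
# using the standard cube-root scaling of blast radius with yield.
# The 0.6 km baseline comes from the text; the other yields are illustrative.

baseline_yield_kt = 5.0
baseline_radius_km = 0.6

for yield_kt in (1, 5, 12.5, 15, 1000):
    radius_km = baseline_radius_km * (yield_kt / baseline_yield_kt) ** (1 / 3)
    print(f"{yield_kt:7.1f} kt -> severe blast damage out to roughly {radius_km:.1f} km")
# Because radius grows only as the cube root of yield, cutting the yield from
# 1 megaton to 12.5 kilotons (a factor of 80) shrinks the severe-damage radius
# by only about a factor of four -- hardly a "surgical" weapon.
```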

Weapons of last resort

As another important aspect of the delegitimization process, the United States, rather than preserving and heralding the right of first use, should urge the international community to ban the use of nuclear weapons except in retaliation for nuclear use by others or, particularly in the case of small states such as Israel, as a last resort if the survival of the nation is at risk. (Eliminating the possession of nuclear weapons, the ultimate ideal outcome, will be achieved incrementally, if at all, after transparency and confidence are established and specific regional security concerns are removed.) The NATO alliance came close to this policy formulation in its London Communiqué of 1990, when it sought to reassure Russia by deeming nuclear forces “truly weapons of last resort,” and again in the 1999 Strategic Concept, when it noted that “the circumstances in which any use of nuclear weapons might have to be contemplated … are therefore extremely remote.”

The European allies of the United States can be helpful in this regard. They need to abandon their attachment to European-based U.S. tactical nuclear weapons: the 200 to 400 bombs deployed in Belgium, Germany, Italy, the Netherlands, Turkey, and the United Kingdom, which constitute the last remnants of the Cold War flexible response policy. In the early years of the Clinton administration, the Pentagon concluded that there was no longer any military requirement for these weapons in Europe. The allies, however, were loath to break the nuclear umbilical cord at that time, and the weapons remain as a symbol in the European mind of U.S. commitment to continental security. If the Europeans can wean themselves from this perverse sign of solidarity, a step that might have been made easier by erratic and bellicose U.S. behavior in this decade, a half-dozen NATO allies might finally be cleared of nuclear weaponry. In turn, this move might encourage Russia to reciprocate by constraining its tactical nuclear weapons stockpile.

A declaration of non-use would be difficult, but perhaps not impossible, to negotiate. The nuclear weapon states have already pledged not to attack non-nuclear states with nuclear weapons (the “negative security assurances” noted above) as follows: “The United States reaffirms that it will not use nuclear weapons against [non-nuclear-weapon] states parties to the Treaty on the Non-Proliferation of Nuclear Weapons except in case of an invasion or any other attack on the United States, its territories, its armed forces or other troops, its allies, or on a state toward which it has a security commitment, [carried out or sustained] by [such a non-nuclear-weapon state in association or alliance with] a nuclear-weapon [state].”

This declaration has been acceptable to the P-5 for some years; the U.S. version of the statement was first enunciated in 1978 under President Carter. According to a 2004 poll conducted by Stephen Kull, director of the University of Maryland’s Center on Policy Attitudes, 57% of the respondents believed that the United States should “reconfirm” this commitment “so as to discourage countries from trying to acquire or build nuclear weapons.” The existing declaration could easily be rewritten (as notionally indicated by the bracketed text above) to make the use of nuclear weapons justifiable only in response to nuclear attack. Of course, drafting such a statement is much easier than marshalling the political forces to endorse it, but if the United States takes the lead in seeking to delegitimize the use of nuclear weapons under any but the most extenuating circumstances, it may be possible to rally the other P-5 members to the declaration.

As a final aspect of delegitimization, the next U.S. administration should encourage the creation of NWFZs, the goal of which would be to make increasing areas of the globe off limits to nuclear weapons. Although the NPT is a nearly universal agreement, it is also an agreement with 187 very diverse members stretched over vast geographical and cultural distances and whose ultimate arbiter is the UN. Regional NWFZs are smaller units and, in theory at least, deal with the national security concerns of a “neighborhood” of member states. The treaty-based NWFZs that already exist could provide model frameworks for the negotiation of new ones. (Nonsovereign territories such as Antarctica, outer space, and the seabed are already off limits to nuclear weapons.)

Thus far, however, the United States has resisted going along fully with new zones being created. The United States signed the protocol to the African NWFZ Treaty, but with a reservation allowing the use of nuclear weapons against states in the NWFZ that use CWs or BWs. The United States, along with other nuclear weapon states, also has not signed the relevant protocol to the Southeast Asia NWFZ Treaty, claiming that it conflicts with the right of passage; that is, with the transport of nuclear cargoes through international waters and airspace. (Because the United States no longer has nuclear weapons on surface ships, this objection could be reconsidered.) Rather than taking exception to these zones, the United States should welcome them as reinforcing its own security goals and seek to strengthen efforts elsewhere in the world to rule out the presence of nuclear weapons.

Nuclear weapons are a clear and present danger to the United States. Because the United States is at present unwilling to negotiate treaties or enter into binding agreements, the burden of securing our future will fall on the next president. If his (or her) administration hopes to enhance U.S. security against the most serious threats, it will have to do more than pursue terrorists or enforce nonproliferation. It will also have to reduce the attractiveness of nuclear weapons to itself and to the rest of the world. This will entail adopting policies that delegitimize nuclear weapons by reducing the incentives to acquire them and by relegating them to a deterrent, retaliatory role or to weapons of last resort. If we fail to wean ourselves from the idea that the threat or use of nuclear weapons can ensure our security, then we are likely to find that the cure for nuclear amnesia involves a nasty shock and an acrid smell.

Federal Neglect: Regulation of Genetic Testing

Government needs to ensure that genetic tests provide useful medical information and that the test results are reliable.

U.S. consumers generally take for granted that the government assesses the safety and effectiveness of drugs and other medical products before they are made available commercially. But for genetic tests, this generally is not the case. At the same time, the number and type of genetic tests continue to increase, and tests for more than 900 genetic diseases are now available clinically. Genetic testing is playing a growing role in health care delivery and is providing information that can be the basis for profound life decisions, such as whether to undergo prophylactic mastectomy, terminate a pregnancy, or take a particular drug or dosage of a drug. Current gaps in the oversight of genetic tests, and of the laboratories that offer them, thus represent a real threat to public health.

Currently, the government exercises only limited oversight of the analytic validity of genetic tests (whether they accurately identify a particular mutation) and virtually no oversight of the clinical validity of genetic tests (whether they provide information relevant to health and disease in a patient). To the extent that oversight exists, it is distributed among several agencies, with little interagency coordination. As a result, no clear regulatory mechanism exists to guide the transition of tests from research to clinical practice, or to ensure that tests offered to patients are analytically or clinically valid. In order to protect consumers, and to help advance the potential benefits offered by genetic testing, government action is urgently needed.

Lingering problems

Most genetic tests are not sold as stand-alone products but as services by clinical laboratories. Clinical laboratories are regulated under the Clinical Laboratory Improvement Act (CLIA), as amended in 1988. CLIA was enacted to strengthen federal oversight of clinical laboratories and to ensure accurate and reliable test results after Congress found widespread poor quality of laboratory services.

CLIA, which is administered by the Centers for Medicare & Medicaid Services (CMS), imposes basic requirements that address personnel qualifications, quality-control standards, and documentation and validation of tests and procedures. For most “high-complexity” tests, meaning those that require a high degree of skill to perform or interpret, CLIA requires periodic “proficiency testing,” in which the laboratory must demonstrate its ability to accurately perform the test and interpret the results. Genetic tests are high-complexity tests, but CMS has not created a genetic testing “specialty” for molecular and biological tests, and therefore specific proficiency testing for these genetic tests is not mandated under CLIA. This means that laboratories must determine their proficiency for themselves. Some labs do so by using proficiency-testing programs established by professional organizations; however, the use of these programs is not required under CLIA, and these organizations provide proficiency-testing programs for only a small subset of genetic tests.

As early as 1995, the National Institutes of Health (NIH) and the Department of Energy jointly convened a government task force to review genetic testing in the United States and make recommendations to ensure the development of safe and effective genetic tests. Since that time, government advisory bodies have urged CMS to strengthen CLIA oversight for genetic tests by, among other things, establishing a specialty area for genetic testing. However, although the government announced in 2000 that it would establish a genetics specialty area, no standards have yet been issued.

Test kits and home brews

A genetic test can be performed using either a “test kit” or a “home brew.” Test kits, as their name implies, contain the reagents needed to perform the test, instructions on test performance, and information regarding what mutations are detected. Kit manufacturers sell these tests to laboratories, which use them to perform the tests. “Home brews” are assembled in house by the laboratory and are used by the laboratory to analyze patient samples and provide results to health care providers and patients.

Laboratories that use home-brew tests currently are subject to only minimal CLIA oversight. CLIA does not explicitly authorize CMS to evaluate how accurate home-brew tests have to be in predicting a particular clinical outcome (clinical validity) or the likelihood that the use of a test will lead to an improved health outcome (clinical utility). Moreover, CLIA does not permit CMS to be a “gatekeeper” for home-brew tests, in that it authorizes neither prospective review nor pre- or postmarket approval of new tests by CMS. The decision to offer a new genetic test is within the sole discretion of each clinical laboratory director. Nor can CMS restrict when and for whom a test may be performed, meaning that it is up to the provider to determine whether a particular test is appropriate for a particular patient, without the help of specified indications for use (such as those provided for drugs and medical devices).

The Food and Drug Administration (FDA) customarily regulates most medical products, but its jurisdiction over home-brew tests is unclear, and the agency at various times has taken different positions on the issue. Recently, the agency has stated publicly that it lacks the statutory authority to regulate home-brew tests.

Test kits, however, are regulated by the FDA as medical devices. Before they can be marketed, the manufacturer must submit data to the FDA demonstrating that the test accurately identifies a mutation of interest and that the mutation correlates with present or future health status. However, of the more than 900 diseases for which genetic tests are currently available clinically, the FDA has approved only four test kits to detect mutations in human DNA: for factor II and factor V Leiden, which affect blood clotting; cytochrome P450 genotyping, which affects the rate at which drugs are metabolized and thus can help in determining dosage; and cystic fibrosis. The manufacturer or laboratory, and not the FDA, makes the decision whether to develop a particular genetic test as a test kit or a home brew and, therefore, whether submission to the FDA is required. The tiny number of FDA-approved test kits makes it clear that manufacturers prefer the less-regulated home-brew route and that the current regulatory regime allows them to avoid stringent FDA oversight.

The FDA also regulates as medical devices certain components, known as “analyte-specific reagents” (ASRs), of home-brew tests. ASRs are small molecules that serve as the active ingredients of home-brew tests, and they can be manufactured for sale or made in house by the laboratory. The FDA’s oversight of ASRs is fairly narrow; ASRs that are manufactured must be sold only to laboratories certified to perform high-complexity tests and must be labeled in accordance with FDA requirements. Also, FDA regulations state that home-brew tests that are developed using commercially distributed ASRs must be ordered by a health professional or “other persons authorized by state law.” The FDA interprets this regulation to require that an ASR-based home-brew test be ordered only by a health care provider, but the agency does not appear to have ever enforced this provision. Additionally, the regulation does not distinguish between a patient’s personal physician and a physician-employee of the testing laboratory. Nor does the FDA regulate the claims that laboratories make about tests developed using ASRs.

In the absence of a coherent system of oversight, it is difficult for providers or patients to have confidence in the claims made by those selling genetic tests or in the competence of the laboratories performing them. The absence of a regulatory system that requires a premarket demonstration of validity, moreover, has created an environment ripe for entry into the marketplace of tests of unproven medical value that are targeted directly to consumers.

Targeting consumers

The phrase “direct to consumer” is best known in the context of pharmaceutical advertising, where it is used to refer to advertisements that inform patients of the availability of a particular medication to treat a specific condition, such as depression or erectile dysfunction, and encourage them to ask their doctor about the drug. These ads have generated controversy, with some observers arguing that the ads induce demand inappropriately and fail to inform patients adequately regarding the risks of the drugs being promoted. Nevertheless, for prescription drugs, these ads can increase demand only indirectly: The physician serves as a gatekeeper, ensuring that only those medications appropriate for a patient are prescribed. Additionally, the safety and effectiveness of the drugs have already been assessed by the FDA.

Direct-to-consumer (DTC) genetic testing, in contrast, encompasses three different scenarios: the advertising of a genetic test that is available only upon a health care provider’s order; the advertising and sale of genetic testing directly to consumers, without the involvement of any health care provider; and the advertising and sale of testing services directly to consumers, with some involvement by a health care provider employed by the tester (for example, the laboratory). Today, several genetic tests are being advertised and sold directly to the public, through both Internet Web sites and retail stores.

Most laboratories do not currently offer genetic testing directly to the public. In fact, only about eight companies promote DTC testing through Internet Web sites for health-related conditions (excluding, for example, genetic tests such as those for paternity and ancestry). However, the growth of DTC testing is likely to continue, given the low barrier to market entry, particularly via the Internet; the rapid pace of genetic research; and the interest of consumers in self-care.

Tests offered over the Internet include some that are conducted routinely as part of clinical practice, such as tests for mutations causing cystic fibrosis, hemochromatosis, and fragile X (an abnormality of the X chromosome leading to mental impairment and other conditions). For these types of tests, the most readily apparent differences between DTC testing and provider-based testing are who collects the sample, to whom test results are communicated, and who interprets test results. Some laboratories require a patient to provide the name of a physician and will send results only to that provider, whereas other laboratories send results directly to patients and do not request the name of a provider. Some laboratories have genetic counselors on staff to take medical and family history information and be available for questions about test results; others do not.

Internet-based DTC testing also includes another category of tests: those for conditions lacking adequate evidence of predictive value in the scientific literature. Examples include “genetic profiling” to guide the selection of nutritional supplements, testing to determine propensity to depression, and testing to select an appropriate skin care regimen (also sold by the testing company). One company advertises its tests for obesity and osteoporosis susceptibility and for “oxidative stress” to the nutraceutical, personal and skin care, and weight-loss industries, which, presumably, would offer them directly to consumers.

DTC tests also now include so-called “pharmacogenetic” tests: those used to determine whether a particular medication or dosage of medication is therapeutically appropriate. Although pharmacogenetics holds the promise of improved drug efficacy and reduced adverse reactions, the endeavor is predicated on the availability of accurate and reliable genetic tests. The current lack of coherent oversight threatens to derail this promising new field. Manufacturers and laboratories can simply claim that the tests are home brews in order to avoid rigorous FDA review of their quality.

Avoiding harm

The initial criticism of DTC genetic testing highlighted harms from both the advertising of tests and access to tests in the absence of a health care provider intermediary. The underlying theme of these criticisms has been that consumers are vulnerable to being misled by advertisements and lack the requisite knowledge to make appropriate decisions about whether to get tested or how to interpret test results. It has been argued that consumer-directed advertisements underemphasize the uncertainty of genetic testing results and overemphasize testing’s benefits to a public that is not sophisticated enough to understand genetics. Critics argue that genetic test results are complicated because they may provide only a probability of disease occurring, and that a health care provider is needed to put the test result in context and explain its subtleties. Further, it is asserted that ads may exaggerate the risk and severity of a disease for which testing is available. Thus, critics conclude, DTC advertising and unmediated access will increase consumer anxiety and generate demand for unnecessary testing.

In order to avoid the harms of DTC genetic testing, some observers have proposed restricting access to tests or advertising of tests. Regulating access would involve limiting those authorized to order the tests and receive the results. Regulating advertising would involve limiting the claims that test providers could make about their tests and, potentially, limiting the media through which claims could be made.

Regulating access. Whether health care provider authorization is required in order to obtain a genetic test, or any laboratory test, is the province of state law. Some states explicitly authorize patients to order specified laboratory tests (such as cholesterol or pregnancy tests) without a prescription from a health care provider. Other states categorically prohibit all DTC testing. And still other states are silent on the issue, meaning that individual laboratories decide whether to offer DTC testing. As of 2001, more than half of the states permitted DTC testing for at least some types of tests, whereas 18 prohibited it. Even where a provider’s order is required, it may not be the case that the patient’s interest is the provider’s only interest; sometimes a physician employed by the laboratory is empowered to authorize testing on behalf of a patient.

Federal or state law could prohibit direct patient access to genetic tests by requiring a health care provider to order the test and receive the results. However, relying on state law would probably lead to a patchwork of non-uniform requirements; and Internet-based genetic testing, which may operate outside the reach of any one state, may make enforcement of such laws more difficult. In addition, federal or state restrictions on access would be predicated on the assumption that health care providers, unlike patients, are adequately prepared to appropriately order and interpret tests, but studies have shown that providers often have inadequate knowledge and training to provide quality genetic services.

Regulating advertising. Federal law protects consumers against unfair, deceptive, or fraudulent trade practices, including false or misleading advertising claims. Ads violate the law if they make false statements about a product or service, fail to disclose material information, or lack adequate substantiation. The Federal Trade Commission (FTC) has enforced the law against manufacturers of a variety of purported health products available without a prescription, such as companies that claim that their products promote hair regrowth, cure cancer, or cause weight loss. The FTC also regulates Internet-based advertising of products, including those making health claims, and the agency has conducted periodic sweeps of the Internet and sent notices warning companies of violations of the law.

The FTC has asserted its jurisdiction to take action against genetic test advertising that is false or misleading, and the agency has announced a joint effort with the FDA and NIH to identify appropriate targets for legal action. Nevertheless, the FTC’s limited resources have hampered the agency in pursuing these claims, and this limitation leads the agency to focus on claims with a high likelihood of causing serious harm to many people. Perhaps as a result of its resource shortages, the FTC appears to have taken no action against any genetic test advertisements, even those that would appear clearly false and misleading on their face.

To the extent that advertising is neither false nor misleading and the product or service advertised is legal, the government’s ability to regulate it is highly constrained. The First Amendment provides broad protection for so-called “commercial speech,” and the government bears a high burden of proving that speech is harmful and that restrictions are needed to mitigate or prevent such harms.

Some observers have proposed intervention by the FDA to limit advertising claims about genetic tests. However, the FDA’s jurisdiction to regulate claims made about a product is predicated on the agency’s authority to regulate the product itself. For regulated products, the FDA’s authority extends to claims about these products made in their labeling (and, in the case of prescription drugs, in their advertising as well). The FDA can both mandate the disclosure of risks and warnings and prohibit claims that it believes are inadequately supported by scientific evidence.

The fact that the FDA currently does not regulate most genetic tests precludes review of claims made about those tests. The FDA’s lack of involvement also can affect the FTC’s response, because the FTC, in enforcing its laws against false and misleading advertising, often looks to the FDA’s labeling requirements for guidance regarding appropriate claim parameters. Thus, the absence of a designated oversight body for most genetic tests also means that there is no expert agency with clear authority to assess whether advertisements appropriately disclose all pertinent information to consumers.

Laws also could be enacted to prohibit the advertising of genetic testing, on the theory that this would reduce the chances that patients will be confused or misled or will make inappropriate decisions based on testing. Such laws, in addition to being subject to criticism as unduly paternalistic, also could be subject to challenge on First Amendment grounds to the extent that they prohibit advertising claims that are not clearly false or misleading. Furthermore, although the FTC is currently empowered to prohibit advertising claims that are clearly false and misleading, the agency is not enforcing these laws against the purveyors of any genetic tests.

Crafting a holistic approach

Aside from such practical challenges, restricting access and advertising would not address fundamental concerns regarding the analytic and clinical validity of all genetic tests. Although it certainly is important that patients be adequately informed about the benefits and limitations of genetic tests, test quality is a threshold, and therefore more fundamental, concern. Suppressing advertising about the tests would, to be sure, limit the number of consumers who find out about the tests, and limiting direct consumer access would decrease the number of consumers who could obtain them. But neither of these potential fixes would address whether the tests are performed correctly or are supported by clinical evidence demonstrating that they correlate with current or future health status. Yet these tests can have profound consequences. A predictive genetic test—for example, one that indicates a heightened risk of hereditary breast cancer—may lead a woman to choose prophylactic mastectomy. A diagnostic genetic test—say, for prenatal diagnosis—may lead to termination of pregnancy in the absence of any corroborating medical evidence from other laboratory tests or physical examination. A pharmacogenetic test to predict drug response may lead to prescribing a particular drug at a particular dosage or, alternatively, foregoing a particular therapy.

Given the high stakes involved, the government needs to correct the systemic gaps in oversight that render vulnerable the quality of all genetic tests and the safety of consumers. The current system is fragmented and riddled with gaps. CLIA in theory requires laboratories to demonstrate the analytic validity of all tests performed, but regulations that would better ensure analytic validity for most genetic tests have yet to be implemented. CMS has the legislative authority under CLIA to establish a genetic testing specialty, but it has chosen not to do so. The FDA has the expertise to evaluate home-brew genetic tests, just as it does genetic test kits and many other diagnostic tests, but the agency lacks a clear mandate to review most genetic tests. The FDA might have the legal authority to act, but new legislation clarifying that authority would eliminate the uncertainty and give the agency an unambiguous mandate.

These hurdles could be overcome through more effective leadership at the federal level, predicated on awareness that ensuring analytic and clinical validity is essential if genetic medicine is to achieve its promise of improving health. Regulating test quality would involve establishing and enforcing standards to ensure the analytic and clinical validity of tests before they are made available to the public and to ensure that laboratories are competent to perform them and report results appropriately. Thus, the best approach to alleviating concerns would be a system of oversight to ensure that all genetic tests, whether DTC or physician-based, home brew or test kit, are analytically and clinically valid.

Although DTC testing has been a vivid and headline-grabbing development in genetics, it would be a mistake, and ultimately an unsuccessful endeavor, to focus efforts on remedying the potential harms from DTC tests without considering the entire regulatory context. Without a system in which an upfront expert evaluation can be made with respect to the analytic and clinical validity of genetic tests, it will be difficult if not impossible to make rational decisions about who can and should order the test and receive the results and what claims are appropriate in advertising.

The time has come to shift the focus to ensuring the quality of all genetic tests. Focusing on quality would address many of the concerns raised about access and advertising, and it would benefit all genetic tests, not just those provided directly to consumers. Although there are limits on how much the government can or should do to protect consumers, there are clear opportunities for it to provide patients and providers with greater assurance that genetic tests are accurate and reliable and that they provide information relevant to health care decisionmaking.

Recommended reading

S. E. Gollust, S. C. Hull, and B. S. Wilfond, “Limitations of Direct-to-Consumer Advertising for Clinical Genetic Testing,” Journal of the American Medical Association 288 (2002): 1762–1767.

S. E. Gollust, B. S. Wilfond, and S. C. Hull, “Direct-to-Consumer Sales of Genetic Services on the Internet,” Genetics in Medicine 5 (2003): 332–337.

S. C. Hull and P. Prasad, “Reading Between the Lines: Direct-to-Consumer Advertising of Genetic Testing,” Hastings Center Report 31 (2001): 33–35.

G. Javitt, E. Stanley, and K. Hudson, “Direct-to-Consumer Genetic Tests, Government Oversight, and the First Amendment: What the Government Can (and Can’t) Do to Protect the Public’s Health,” Oklahoma Law Review 57 (2004): 251.

Secretary’s Advisory Committee on Genetic Testing, Enhancing the Oversight of Genetic Tests: Recommendations of the SACGT (2000).

B. Williams-Jones, “Where There’s a Web, There’s a Way: Commercial Genetic Testing and the Internet,” Community Genetics 6 (2003): 46–57.

Gail H. Javitt is a policy analyst at the Genetics and Public Policy Center and a research scientist at the Berman Bioethics Institute of Johns Hopkins University. Kathy Hudson is director of the center and an associate professor at the institute.

Forum – Winter 2006

The university of the future

In “Envisioning a Transformed University” (Issues, Fall 2005), James J. Duderstadt, Wm. A. Wulf, and Robert Zemsky have once again rung a bell that seemingly has not yet been heard at our universities. I would not term it a wake-up bell, as that was rung in 1994 with the emergence of the user-friendly World Wide Web. Nor do I consider it a fire bell, as one was first heard in 1998 with the dot-com rush that many academics undeservedly mock and deride as a fad or a colossal disaster. And it is definitively not the bell that should have sounded resoundingly with the decision by Google in 2005 to digitize all published materials into a single interactive learning site. The bell that Google rang is roughly equivalent to an intellectual liberty bell.

Why a liberty bell? What do liberty and knowledge and universities all have in common? With the advent of massive information technology as an enabler of universal customized learning and the imperative need for students to learn quickly and integrate a broad range of disciplines in a rapidly changing world, the monopoly on higher learning once held by universities will vanish. New ways to learn are emerging, new institutions for learning are on the move, and new centers for R&D are springing up in unpredictable places. All of these emerging forces for learning and discovery are being driven by advances in information technology and learning technologies and by changes in knowledge organizations themselves.

Liberty is attained here by the fact that universities, which in the past could be designed and driven by their own internal social constructs, must now move beyond their historic models of elitism and isolation from society and assume a new role: educating students to be capable of rapidly mastering and integrating a broad array of complex and interrelated disciplines. Our universities must foster intellectual flexibility, creativity, and the capacity for innovation in a global society interconnected by advances in communications technology. Universities must adapt to the emergence of empowered and capable independent learners set free by the Internet.

The new directions in which universities must move will require them to rethink bedrock organizational principles and practices. Decisionmaking must match the competitive pace of enterprise despite the fact that time in the academy is measured in semesters. Learning takes place 24/7, but academic calendars impose arbitrary restraints. The socialization process associated with learning is critical, but living/learning residential options remain limited. Knowledge is inherently interdisciplinary, yet institutions are reluctant to accommodate new departments and research centers constructed across disciplines.

These new directions will engender change and reshape both the academy and society, unleashing a new type of intellectual liberty. Students and faculty alike will be able to exercise the freedom to adjust their learning schedules to correspond with their interests, abilities, and inclinations. Many students who are presently unable to survive our 19th-century system of learning will excel with the freedom to learn in a way that matches their own intellectual and creative gifts, as opposed to the focal learning presently implemented to match the particular interests of a given set of faculty members.

Universities are wonders to behold. They are transformational catalysts for societal change, and they perform a function essential to our collective survival. At present, they engage relatively small numbers of individuals in a very structured learning process. Advances in information technology of the order outlined by the National Academies and the forces of change those advances will unleash tell me that universities will soon be free to create learning environments that offer ubiquitous access to all information. How will universities adapt to this new information-liberated world—this new stage of human evolution? Some will adapt and change and bring about a new form of learning, one yet to be designed. Others will stay their course and retain their rigid pedagogy, inflexible practices, and focused, predictable outcomes. Fifty years from now those universities will appear as removed from the front lines of change as the most remote monasteries of the Middle Ages.

MICHAEL M. CROW

President

Arizona State University

Tempe, Arizona


James J. Duderstadt, Wm. A. Wulf, and Robert Zemsky’s article both illuminates and sounds the alarm about issues involving information technology’s impact on the future of higher education. Readers are reminded that faster, less expensive, more accessible, and more powerful technology will continue to change how faculty conduct research and teach, how students learn, how administrators conduct business, and how all of us interact across campuses and throughout the world.

Although the process of change may be in some ways evolutionary, the impact will be revolutionary. Unfortunately, we in higher education are all so busy extinguishing fires and pursuing resources for today that we rarely set aside sufficient time for reflection and envisioning. John Dewey, whose name the authors invoke, wrote that “The only freedom that is of enduring importance is freedom of intelligence . . . freedom of observation and of judgment exercised in behalf of purposes that are intrinsically worthwhile.” Increasingly, we need to take the time and find the resources to do just that, reflecting not only on technology’s short- and long-term influence, but also on our response in anticipation of both certain and uncertain changes to come.

Tom Friedman has clearly thought about these issues and “gets it.” His best-selling The World Is Flat asserts that one of the most important developments of the new century is “the convergence of technology and events that allowed India, China, and many other countries to become part of the global supply chain for services and manufacturing, creating an explosion of wealth in the middle classes of the world’s two biggest nations, giving them a huge new stake in the success of globalization. And with this ‘flattening’ of the globe, which requires us to run faster in order to stay in place,” Friedman asks whether “the world … [has] gotten too small and too fast for human beings and their political systems to adjust in a stable manner.” Clearly, higher education faces the same challenge, whether we are talking about the impact of technology on the research enterprise or wide-ranging student learning issues, the rapidly changing demographics of today’s students, or students’ reliance on technology in all facets of their lives.

Among the most important issues the authors raise is that of community. Whether focusing on how we conduct research or on how and where students learn, universities will undoubtedly need to continue to think about the meaning of community and about technology’s role in strengthening our communities, with less and less emphasis on geography. There is an important caveat, however: As higher education increasingly embraces “big science,” supported by new technologies and even more highly organized structures for research collaboration funded by national agencies, we mustn’t discourage the creativity of individual investigators or underestimate the importance of human interaction both on and among campuses. The traditional benefits of such interaction will hopefully be enhanced by powerful new technologies, and universities can remain in the business of educating students and building human interaction in our society. In fact, as interdisciplinary collegial research becomes increasingly important, trust among human beings and the quality of relationships will become more important than ever. We in the academy must learn to take the time and dedicate the resources to reflect on, plan for, and adjust to new technologies and to do so in ways that place even greater emphasis on the importance of human relationships. Leadership will be critical: We must reinforce to the community that technology is not a threat but a tool for strengthening our core values.

FREEMAN A. HRABOWSKI III

President

University of Maryland, Baltimore County

Baltimore, Maryland


I read with interest your collection of essays on technology and the university. But as I listen to the battle cry for sweeping change, I find myself musing on an earlier technological revolution: the printing press. Why was it that, when the world’s knowledge became available in bookstalls all over Europe, the university lecture system did not die out? It thrived instead. I think the answer is that available texts and data don’t do away with the need for teachers, but quite the contrary. It’s not so easy to “read.” You need guidance, discussion, examples, and analysis. So now, when great banks of data and text are available “at Starbucks,” as Duderstadt et al. note, I don’t see the end of teaching, but a crying need for it. Education teaches us what is there, what can be done with it, how to think about what it means.

To be sure, the new technology is “interactive,” unlike books. Can it, then, educate students by itself, or at least with great efficiency—say, 50 students to a class, instead of 20? I don’t think so. My evidence is in my own experience, as a longtime teacher of freshmen and sophomores at a community college, and an early adopter—and eventual rejecter—of online instruction. Communication between teacher and student can be rich and nuanced in the classroom, but it is usually canned, predictable, and dry online. Online students are excited about a technology that looks efficient, from their point of view. “Look, I can learn composition while I watch the kids and make dinner.” Online students go through the motions. Uninvolved in a human relationship, they ask, “What do I have to do, and how fast can I do it?”

I appreciate Susanne Lohmann’s pitch about diversity of learning styles. Indeed, these exist, and some students are happier online than others. (The dropout rate in online courses is prodigious.) But we should not be misled into thinking that educational technology is being introduced in order to help those who are less linear; it is being introduced, as your writers note, because it saves the college money (and profits the vendors, not so coincidentally). Here at the community college, we are putting students online as fast as we can, because we can’t afford to build classrooms and hire faculty.

But most people like to be taught by a person. If I were Harvard, I would not run off to imitate the poorest colleges. In half a generation, a human teacher is precisely what elite and expensive colleges will have to offer. Surely some of the wealth that technology is creating in society as a whole can be captured and dedicated to the time-honored business of passing not facts, but understanding, to the very people who will do the research at the research universities of the future.

SUSAN SHARPE

Professor of English

Northern Virginia Community College

Annandale, Virginia


James J. Duderstadt, Wm. A. Wulf, and Robert Zemsky are performing a great service by raising awareness of impending major changes in higher education. I agree that the general types and magnitudes of changes described in their article are highly likely to occur. My comments are intended to extend the discussion further.

I think of technology not as a driver, as described in the article, but rather as an enabler. The real drivers for change in universities are the same larger forces that are producing enormous pressure for change in almost every other aspect of our lives. Technology becomes a critical factor only when it enables people and organizations to respond effectively to these larger forces such as politics, economics, demographics, and nature. Thus, the challenge is not “to think about the technology that will be available in 10 or 20 years” and how the university will be changed by that technology. Rather, it is to imagine how the society of 20 years from now will have changed and how the university can best meet the educational and research needs of that society. Technological innovations that help universities to meet those evolving needs will be critically important, but innovations that do not help to meet the societal needs are not likely to prosper within our institutions.

A major question for future planning is “What really is our business?” A key lesson that corporate America learned over the past two decades is that survival depends on knowing what business one is in; many corporations learned to their dismay that they were not really in the business they thought they were in. Most of us in research universities have not thought seriously about what business we are in or what business society wants us to be in. We generally accept that our business is education, research, and service, but within restricted definitions of each that are appropriate to today’s enablers. For example, we have traditionally educated primarily a set of students with a fairly narrow band of characteristics (academic credentials, age, etc.) who can and will come to our campuses. Enablers such as technology and economic and political alliances will inevitably break many of the constraints that led to this narrow model and allow society and ourselves to reconsider our educational goal and mission. In a world with greatly changed geographic or political constraints, we are likely to have a different conception of whom we have an obligation or a desire to educate.

Finally, much of the educational innovation over the next few decades probably will come from Asia, with its enormous need to provide mass higher education inexpensively, and from the for-profit sector, which sees a huge potential worldwide marketplace. This will be the real “edge of the organization” referred to by the authors, and the resulting disruptive innovation will require all of us to play a truly global role in order to compete effectively.

LLOYD ARMSTRONG

Provost Emeritus and University Professor

University of Southern California

Los Angeles, California


Cyberinfrastructure for research

In “Cyberinfrastructure and the Future of Collaborative Work” (Issues, Fall 2005), Mark Ellisman presents compelling scenarios for advanced cyberinfrastructure (CI)-enhanced science, highlighting quite appropriately the ground-breaking Biomedical Information Research Network (BIRN) project that he directs. CI is a collection of hardware- and software-based services for simulation/modeling, knowledge and data management, observation and interaction with the physical world, visualization and interaction with humans, and distributed collaboration. CI is the platform on which specific “collaboratories,” research networks, grid communities, science portals, etc. (the nomenclature is varied and emergent) are built. The full “Atkins Report” mentioned by Ellisman is available at http://www.nsf.gov/od/oci/reports/toc.jsp, and resources about the collaboratory movement can be found at the Collaboratory for Research on Electronic Work.

A growing number of CI-enhanced science communities, like BIRN, are becoming functionally complete and the place where the leading-edge investigators in the field need to be. They are not limited to automating past practices to make them faster, better, and cheaper; they are about enabling new things, new ways of working, and potentially broadened participation. The push of technology and the pull of science for more interdisciplinary, globally distributed, and interinstitutional teams have combined to create an inflection point in information technology’s impact on science and, more generally, on the activities of many knowledge-based communities.

A potentially revolutionary advanced CI program is very complex and will not emerge solely as a consequence of technological determinism. Real initiative, new resources, and leadership are required. It demands the nurturing and synergistic alignment of three types of activity: R&D on the technical and social architectures of CI-enabled science; reliable, evolving, and persistent provisioning of CI services; and transformative use through iterative adoption and evaluation of CI services within science communities. All this should be done in ways that extract and exploit commonality, interoperability, economies of scale, and best practices at the CI layer. It will also require shared vision and collective action among many stakeholders, including research-funding agencies, universities, and industry. An even bigger challenge and opportunity is to mount CI programs in ways that benefit, connect, and leverage research, learning/teaching, and societal engagement at all levels of education and in a broad range of areas, including the humanities and arts.

Arden Bement, director of the National Science Foundation (NSF), is providing much-needed leadership for the CI movement. It is unfortunate, however, that just as there has never been more excitement and consensus among global science communities that advanced CI is critical to their future, there has never been a worse environment for financial investment in NSF and basic research in general. A coordinated and truly interagency approach, leveraged by our research universities, is required to establish clear leadership for the United States in the CI movement—an essential infrastructure for leadership in our increasingly competitive, global, and knowledge-based economy.

DANIEL E. ATKINS

Professor of Information, Electrical Engineering, and Computer Science

University of Michigan, Ann Arbor


Protecting critical infrastructure

In their excellent article “The Challenge of Protecting Critical Infrastructure” (Issues, Fall 2005), Philip Auerswald, Lewis M. Branscomb, Todd M. La Porte, and Erwann Michel-Kerjan raise a number of key points. Because the “border is now interior,” U.S.-based businesses, perhaps for the first time in America’s history, find themselves on the front lines of a global battlefield. And because the economic infrastructure is largely privately owned, its protection depends primarily on “private-sector knowledge and action.” Yet their conclusion that there are insufficient incentives for private-sector investment in security may be premature.

It is instructive to remember that 25 years ago, U.S. business thought of quality as an unaffordable luxury, rather than a core business process with the potential to reduce cycle times and production costs and create competitive advantage. Like the quality inspectors of two decades ago, security managers are often seen as company cops, rather than global risk management strategists. Security is rarely designed into the company’s training programs, engineering, operations, or organization. But as the authors point out, the “Maginot Line” approach to security is both expensive and breachable.

The Council on Competitiveness’s Competitiveness and Security Initiative, led by Charles O. Holliday, chief executive officer of Dupont, and Jared Cohon, president of Carnegie Mellon University, found significant missed opportunities in achieving higher security and higher efficiency together. The initiative is identifying a number of areas from which to calculate a return on security investments: gains in productivity across the entire operation, reductions in losses heretofore tolerated as a cost of doing business, new revenue streams that flow from products and processes that embed security, and enhanced business continuity and crisis recovery. It has also identified less quantifiable but equally critical business benefits from security, including reputational value, shareholder value, and customer value.

Unfortunately, we have also found that most companies are not organized to identify these opportunities or capture a return on investments in security.

Security is not fully integrated into the business functions of strategic planning, product development, business development, risk management, or outsourcing management.

In many sectors, there is little operational business management resident in the security departments to enable a “build it in, don’t bolt it on” approach and little security expertise in the business and engineering units.

Unlike other core business functions, the roles and responsibilities of the security manager vary widely from industry to industry and company to company, often being dependent on personal rapport with the chief executive officer or the board of directors’ awareness of security challenges. There are few metrics to calculate the return—anticipated or real—on security investments.

The authors rightly note that in this new threat environment, “rigid and limited public-private partnerships must give way to flexible, more deeply rooted collaborations between public and private actors in which trust is developed and information shared.” From the Council on Competitiveness’s vantage point, however, what is also needed is an economic value proposition for increased security, new metrics, new organizational structures, and new technological options; and, most important, the visionary business leadership to implement them.

DEBRA VAN OPSTAL

Senior Vice President

Council on Competitiveness

Washington, DC


Can we anticipate disasters?

“Flirting with Disaster” (Issues, Fall 2005) by James R. Phimister, Vicki M. Bier, and Howard C. Kunreuther lays out several issues confronting high-hazard enterprises and regulators vis-a-vis the precursor analyses meant to help them ward off operational threats. Underlying those issues is the question of the scope and quality of the event analyses that delineate the precursors.

To develop more robust event analyses in any high-hazard industry, a first step is to recognize that precursor reporting systems’ effectiveness depends on the extent to which boards of directors, executives, and senior managers define these local efforts as being as structurally necessary to meeting their public responsibilities and their financial goals as any other basic production activity, and allocate intellectual and financial resources accordingly. But when those resources are understood in accounting terms to be “indirect” administrative contributions to production, the tendency is to minimize them. The scope and depth of event reviews suffer.

A second precondition is to reconsider processes for evaluating precursors’ risk significance. In nuclear power, accident sequence precursors and probabilistic risk analyses rely on global scenarios of meltdown threats; because these cannot account for local component interactions and dependencies introduced by upgrades and maintenance, their value as reference points is limited. Validating precursors’ risk significance requires statistical analyses that are unlikely to take into account contexts other than those of material, mechanical, and physical systems. The cultural, economic, and political systems that deep analyses find as the precipitating contexts of major accidents, despite being well recognized locally as “error-forcing conditions,” have not been regarded as being risk-measurable. At the least, it is possible to make these measurable and reportable by incorporating so-called “subjective” measures, which are already widely used in risk modeling and are derived from validated methods of eliciting and structuring expert opinion. Formalized reporting and trending systems built up out of aggregated data would then also reflect such substantive evidence.

A third precondition is to reexamine the sources and meanings of “complacency,” a catch-all precursor. The absence of curiosity and doubt that it implies may, however, be a consequence of the well-observed insularity of those in high-hazard industries. Hermetic systems of language and talk, resistance to outsiders’ ideas, consultants and contractors who play “NASA chicken” (not being first to bring up an issue), internal turfs, and judging the credibility of knowledge by organizational rank—all these and others fence out diverse perspectives and new questions.

Locally, “expert opinions” should also include those of creditable outsiders. Globally, the rare informal discussions of issues in joint meetings of industry executives, technical experts, and social, political, and behavioral scientists concerned with the many facets of high-hazard enterprises (biotech, chemicals, medicine, nuclear power, security, and transportation) need to be shaped into a permanent forum for amplifying the fund of intellectual capital being drawn on to stay ahead of disaster, for their sakes and ours.

CONSTANCE PERIN

Visiting Scholar in Anthropology

Massachusetts Institute of Technology

Cambridge, Massachusetts

www.constanceperin.net


“Flirting with Disaster” raises important issues. Having spent years of my professional career analyzing the Nuclear Regulatory Commission’s systems and working to improve them, I have a few comments.

I would argue, based on the analyses I have done, that the number of reported incidents is a meaningful indicator and that efforts should be taken to reduce the incidents that get reported. Every incident that occurs is an alert to the system and a sign of stress that, if not addressed, can lead to cycles of decay.

The aim of all reporting systems should be nearly error-free operation. If the number of reported incidents becomes too high, too many resources get diverted to fighting fires, which leads to negative cycles of more incidents, fewer resources to devote to them, ever more incidents, and so on, until the system itself can become unglued.

I would say that it is not a question of having either a centralized system like nuclear power or a decentralized one like the airlines. Rather, both are needed, and the real character of the systems in both industries has elements of both centralization and decentralization. To catch error and correct it, multilayered redundant systems are necessary. As in the design of the U.S. government, a functional division of labor plus elements of central and local control must be set up to create and ensure various checks and balances.

The system has to be carefully thought out as to how the different parts relate, but it also must be left somewhat open-ended and decoupled in places, because a good incident-reporting system thrives on both a high degree of discipline and order among the parts and on some disorder for dealing with unexpected contingencies. An overdesigned and overdetermined system has many positive attributes and is likely to function better than one that is underdesigned and underdetermined, but some degree of underdesign and underdetermination is still needed to deal with the surprises that inevitably take place. Ideas must flow freely; the system cannot be too controlled.

There also must be multiple means for accomplishing the same purposes: a requisite amount of redundancy and even overlap. To bring up important issues and afford them the attention they are due, some degree of conflict or tension among competing units with opposing missions is also sure to be needed. Other keys to a good system include theories for classifying events and how they can lead to serious accidents, methodologies for analyzing these events, the will to take corrective action and to overcome political obstacles, and a culture geared toward safety and not risk-taking.

More research on this important topic is needed, and “Flirting with Disaster” is a good start.

ALFRED MARCUS

Spencer Chair in Technology Leadership and Strategy

University of Minnesota

Minneapolis, Minnesota


I very much liked “Flirting with Disaster,” which provides a great deal of useful information in a small space. Here are some reflections that supplement its insights.

Some years ago I got a call from the director of a nuclear power plant, whom I had met at a conference. “Can you tell me,” he asked, “what I can do to make sure we don’t miss the faint signals that something is wrong?” I had no answer for him then, though much of my adult life has been spent working on similar issues. In the months that followed, however, I would work on this question from time to time and eventually felt I could give him a good answer. I finally wrote a paper called “Removing Latent Pathogens” that summed up my thoughts on receiving those faint signals. What follows is built on the conclusions of that paper.

First of all, it is important to secure the alignment of people in the organization. This means that employees feel that they and others are on the same team. This is a fundamental prerequisite for ensuring that there will be a report if an employee sees something amiss. Before the Hyatt Regency disaster, the workmen building the hotel had learned to avoid the walkways that would later prove fatal to so many people. But if they brought their concerns to higher authority, there is no record of it. They didn’t feel it was their duty or perhaps their place to comment on it. They weren’t on the same team with the hotel guests. Creating a culture of openness, what I have called a “generative” organization, helps to make even the most junior employees feel that the door is open to the observations of anyone in the organization, or even to contractors. You never know who will spot the problem. It goes without saying that treating employees with the utmost respect is a key part of this culture.

Second, we need to be sure that the employee has as big a picture of the organization and its tasks as we can afford. The technicians who were responsible for putting shims in the mounting of the Hubble Space Telescope had no idea that they were creating a billion-dollar problem. When an inspector tried to get in their lab, they locked the door and turned up the music. Encouraging employees to get additional training and familiarizing them with the work of other departments help in building this big picture. Being able to understand the implications of what they see and do makes them better able to spot a potential problem and to report situations that they suspect can cause trouble. It is often the department-spanning employees who see the things that others don’t. A common consciousness of what might constitute a danger is also worth cultivating. A year and a half before the “friendly fire” accident in which two U.S. Army Black Hawk helicopters were shot down over northern Iraq, there had been a “dress rehearsal.” But after that dress rehearsal, which within seconds produced a “fatal” result, no one picked up the phone and reported the incident. And so, in the second case, the American troops aboard the two helicopters died.

Managers need to train themselves in the art of being responsive to those who do not seem to be experts. I notice that in the recent attempt to change the culture of NASA, organization members were taught to engage in “active listening.” How successful this was I have no idea, but there is no question of its historical importance. Charles F. “Boss” Kettering once spent time talking to a painter who thought he could see that the propeller on one of the ships bearing a Kettering-designed diesel was off by half an inch. This was a big propeller, and God knows how the painter could see it, but Kettering called the design office and had the propeller checked, and sure enough, it was off by a half-inch.

Finally, there is the issue of empowerment. The ability to contemplate a problem is often associated with the ability to do something about it. During the last, disastrous flight of the space shuttle Columbia, a team of structural engineers sought to get photos of the shuttle in space. In spite of Air Force willingness to provide such photographs, higher management did not want to ask for them. One reason, apparently, was that no way of fixing a seriously damaged shuttle in flight was known, so why bother to find out about it? When people feel they are powerless to act, they often appear powerless to think. But empowering people encourages them to look, to see, and to explore. Enrico Fermi, whose team built the first nuclear reactor, called this “the will to think.” The will to think comes when workers expect their plans and efforts to be fruitful. Tie their hands and you close their eyes.

If these insights are correct, what can we do to encourage those who shape our organizational cultures to provide a good climate for information flow? It always amazes me to discover that many managers have no idea how their own actions discourage information flow. Even more broadly, many have no idea that information flow is faulty in their organization and have never bothered to improve it. Many case studies suggest, on the contrary, that overt efforts to improve information flow are often very useful. Hopefully, the active listening and other skills that NASA managers were taught will help avoid another Columbia tragedy. But even if not, the rest of us can do much to build loyalty, educate our employees, and give them the power to change things for the better.

RON WESTRUM

School of Technology Studies

Eastern Michigan University

Ypsilanti, Michigan


This is an important set of ideas, and I applaud James R. Phimister, Vicki M. Bier, and Howard C. Kunreuther for writing about them in such an interesting way. Their article couldn’t be more timely. What counts as a precursor to failure is a topic that pops up in various literatures. I would add the following observations.

From the authors’ own description and from other work I’ve read, it appears to me that voluntary reporting of potential precursors is more effective than command-and-control approaches. Because of that, I’m unconvinced that “in some cases mandatory reporting may be preferable.” The problem is not the requirement to report but the way mandatory systems are structured: they give the reporter a very strong interest in shaping a report so that responsibility is deflected. So I think a next step in developing this work is to look closely at the problem of individual and organizational incentives to tell precursor stories in particular ways.

Along these lines, the conclusion that “it is the responsibility of the private sector to be an engaged partner” seems weak to me. I would add to the overall argument in “Flirting with Disaster” that managers, and the organizations they try to control, often see it as in their interests not to see precursors. The article hints at this in pointing out that “not actively trying to learn from [precursors] borders on neglect.” This is a point that could be developed productively.

Voluntary and mandatory reporting systems are set up as alternatives. But the authors see positives and negatives with each. Could we imagine a system that combines the best of both, creating a third way toward more safety?

LEE CLARKE

Department of Sociology

Rutgers University

New Brunswick, New Jersey


Carbon sequestration

“The Case for Carbon Capture and Storage” (Issues, Fall 2005) paints a very rosy picture of the technology’s long-term potential and advances a vigorous argument for investing in projects under the assumption that CO2 levels need only be stabilized over the next 50 years. Jennie C. Stephens and Bob van der Zwaan argue that carbon storage will facilitate the deep reductions necessary to save the world from climate change just when aggressive reductions are required.

Carbon capture and storage (CCS) might be an option in the future when all the questions have been answered and problems fixed, but the world cannot wait 50 years before it tackles climate change as the authors suggest. We urgently need dramatic emissions reductions over the next few decades, on the order of 50% by mid-century, if we are to avoid the worst, irreversible impacts of climate change. Given the unresolved issues with CCS, it would be incredibly risky to assume that it will be possible to drive large reductions in carbon emissions with the technology.

We agree with the authors that there should be continued evaluation of the technology. However, we do not agree that it is clear at this point whether CCS will pan out as a part of the solution to climate change. Furthermore, the recent Intergovernmental Panel on Climate Change (IPCC) report on CCS makes it clear that the technology can be only a part of the solution.

In the end, everyone, including the recent IPCC Working Group, agrees that economics will determine whether CCS technology ever moves beyond the demonstration phase. However, the authors fail to mention the most logical economic driver for CCS technology: a mandatory cap-and-trade program similar to the one proposed by Sen. McCain or the European Union Emissions Trading Scheme being used to implement the Kyoto Protocol. Only tough mandatory caps like these will create the economic space necessary for the advancement of sequestration, whose cost the IPCC report pegged at $25 per ton of CO2.

The authors gloss over a number of important environmental issues with this technology. For example, the promotion of coal gasification—an integral part of CCS—will boost mountaintop mining in the eastern United States by eliminating the preference for low-sulfur Western coal. And the use of oceans as a “natural storage system” would accelerate ocean acidification and further harm already failing marine ecosystems.

CCS is not currently a cost-competitive and safe way to achieve large-scale reductions in global warming pollution. It may become one in the future, if its long-term safety is proven. But until that time, it is much more prudent to aggressively promote renewable energy and energy efficiency and not pretend that a technology exists that will facilitate the burning of all the world’s reserves of coal and oil.

JOHN COEQUYT

Energy Policy Specialist

Greenpeace USA

Washington, DC

GABRIELA VON GOERNE

Climate/Energy Unit

Greenpeace Germany

Hamburg, Germany


Energy research

In recent years, a deluge of articles about energy has appeared in the mass media, as well as in technical and professional journals such as Issues. Virtually without fail, each has contained the same panacea prescription for whatever magic-bullet approach to solving the energy problem it is pushing: government financial support in its various forms, be it direct subsidies, tax incentives, earmarked government grants, government programs, etc. These articles also never fail to appeal to national pride or to predict national economic doom in the international arena: if such federal support is not forthcoming in sufficiently massive amounts, we will fall behind other nations in saving the world with enlightened policies and technological prowess. In the Fall 2005 Issues, such an article appeared concerning carbon dioxide capture and sequestration (not to take issue with sequestration itself, as it seems to hold much promise).

Along with this cacophony has come almost universal criticism of Vice President Cheney’s energy task force and the recent energy legislation coming out of Congress. Doesn’t it occur to anyone how ludicrous it is to condemn the abysmal performance of government and at the same time lay all responsibility for solutions at the feet of that same government? If you were asked to nominate the groups that are the least knowledgeable in these matters, the most beholden to parochial interests, and the least objective in what they believe, whom would you suggest? How about Congress and the general public? Yet these are precisely the groups targeted by all the nostrums.

Could it be that the mass confusion and lack of coherent action in this whole area are due to the community of technologists, economists, and political scientists, who are ostensibly the most qualified to develop an effective national strategy? They are too consumed with philosophical and ideological turf battles, not to mention mud-wrestling over government funds, to actually undertake the interdisciplinary interaction whose virtues they extol. But who knows? If the community could get its act together in some civilized way, it might convince Congress to do something intelligent that had the prospect of being constructive and effective.

It’s just a suggestion, but perhaps the community could show some leadership by convening ecumenical conclaves in which the various disciplines suppress their egos and ideologies, talk to one another, and try to develop just such a national strategy for environmentally friendly energy independence. Heck, Issues could even consider devoting an issue to the topic. I suggest that the goal of such get-togethers be to realistically assess the potential contributions, feasibility, risks, and economic viability of the various approaches to energy supply, with due attention to the estimated time frames and uncertainties involved. What is needed is a strategy that can realistically be implemented within the next 25 to 50 years. Symbolism and turf battles won’t get it done.

This would seem to be a necessary first step before Congress has enough objective information to determine what government support would be most effective and, not to belabor the point, cost-effective. My own impression is that the government can be most useful in the area of regulation rather than as the first resort for funds. But then that’s just me.

CLAY W. CRITES

West Chester, Pennsylvania


IPM revisited

In “Integrated Pest Management: A National Goal?” (Issues, Fall 2005), Lester E. Ehler opens the door to further debate on whether IPM (integrated pest management) has “been implemented to any significant extent in U.S. agriculture.” Although IPM’s intent to reduce pesticide use is laudable, and although various private and public groups have promoted it for more than 30 years, it is questionable whether pesticide use has declined significantly for many crops.

On our own field crop farm in California, we continue to rely on farm chemicals as we have for the past 20 years. We monitor our crops for pests and spray as needed. Sometimes this works and we avoid sprays, which makes monitoring seem like a good approach, but over the years our pesticide use hasn’t changed much. In fact, it has probably increased recently with the arrival of a new weed (perennial pepperweed) that is difficult to control.

If we are to reduce pesticide use, the current IPM approach seems too narrow. Now may be a good time to broaden our scope and focus on integrated crop management, which takes into account the whole farming system. This would include nutrient, soil, and water management; plant variety selection; and landscape effects on pests, as these all affect pest control needs on farms. A good place to start would be the American Society of Agronomy’s voluntary Certified Crop Advisor program, which ensures that participating members have experience and education in multiple disciplines and behave ethically. Ideally, crop advisors should not be able to profit from selling farm chemicals.

Having trained professionals who understand the whole cropping system and do not profit economically from selling chemicals will help provide growers with more options for alternative pest control strategies. However, there is also a need for more research on the basic biology of many of our crop pests. On our own farm, I frequently wonder where many of our major pests overwinter, what their natural enemies are, and whether they have alternative hosts. If we knew more about them, perhaps we could find weak links in their system to control them without the use of pesticides.

There is a great need to find and implement alternative pest management practices on farms that will reduce our dependence on farm chemicals. Pesticides are costly, both economically and in terms of public health and the environment. A good start toward reducing pesticide use would be to broaden our scope of pest control to include all aspects of crop production.

RACHAEL LONG

Long and Son Farming

Zamora, California

To Blog, or Not to Blog

“I’M HOME FROM HAVING A COLONOSCOPY—everything went fine, but I think I’ll let the drugs leave my system for a while longer before doing any serious blogging.”

—Instapundit (Glenn Reynolds) 12/5/05, 11:19 am.

To be fair, this is not a typical Glenn Reynolds opening, but when I decided to visit a few of the most popular blogs as a warm-up for writing this piece, this was the first line that I read. It also illustrates one of the most commonly heard criticisms of blogs: that they are self-centered and self-indulgent. Readers who do not like blogs wonder why anyone would be interested in reading half a dozen daily notes from a stranger on disparate subjects. Does anyone have that many stimulating or insightful thoughts?

What Reynolds, as well as Andrew Sullivan, Mickey Kaus, Joshua Micah Marshall, and others with blog titles such as Wonkette and Political Animal, do provide is an identifiable personal response to the news. Just as many New York Times readers want to know how Paul Krugman, Thomas Friedman, or David Brooks react to events, many blog readers like to have their information filtered by someone whose judgment they trust.

Readers also value the timeliness, brevity, and informal style of many blogs. Blogs can convey a sense of honesty and active involvement. Blog authors have no time for “recollection in tranquility” or the clever crafting of a George Will or Maureen Dowd. Blogs contain hot new thoughts, devoid of artifice and enriched with a heavy dose of irreverence.

Blogs are also independent and democratic. For those who are suspicious of the objectivity of corporate-controlled media, it is reassuring that no suits are looking over bloggers’ shoulders or asking them to consider the reaction of advertisers and stockholders. And with blogs, anyone can express an opinion without having to pass all the tests necessary to capture one of those exceedingly rare sinecures on a newspaper’s op-ed page.

Whether one likes them or not, blogs are lively and influential—and particularly attractive to generation 1.0. In a world in which time appears to be careening downhill, blogs sometimes seem to be the only form of publishing capable of keeping up. Knowing that millions of people are visiting these sites every day, sometimes several times a day, can be discouraging to someone who works at the 19th century pace of a quarterly journal.

On the other hand, blogs have some obvious liabilities. First, there are too many of them, and each of them has too many postings. Who has time to keep up? I am certain that many of them include something worth reading from time to time, but how much time should I have to spend clicking through the blogosphere and wading through the drivel and trivia to find these gems?

Although some bloggers had established reputations as scholars or journalists before they became bloggers, many arrived out of nowhere. When trying to sift through this avalanche of information and commentary, one has to stop to ask: “Who are these people? Why should I listen to them?”

For readers with enough time and dedication, it is possible to find a few bloggers worth listening to regularly. And if one finds a blogger who seems reliable, that blogger probably provides links to 30 or 40 fellow bloggers who are worth reading. By visiting all of those blogs, one can eventually find a handful of bloggers to follow. Still, keeping up with even a couple of blogs is time-consuming, and I can’t say that I’ve found any blog that I want to read every day, never mind a couple of times a day. Time is limited, and so is everyone’s supply of brilliant insights.

Of course, blog lovers do not mind wasting a little time because even a mediocre blog might be entertaining. But you are reading Issues in Science and Technology, and I have come to the painful realization that our readers are, demographically speaking, not among the most fun-loving quartiles. It appears that conducting research, writing books and articles, managing companies, working in Congress, or trying to influence policy takes its toll on your impish, fun-loving spirit. In fact, much of the irreverence found in blogs is probably aimed at you. Besides, even if you do enjoy a little anarchic fun now and then, you don’t have a lot of spare time to look for it.

With the pros and cons of blogging in mind, Issues is going to launch an experiment in its own form of blogging. Blogging 2.0 will be brief and timely, but it will come from experts who do not have the luxury of facile irreverence. Rather than having one person spout off on any and all topics, we will have a team of bloggers who will each focus on the areas they know best. Rather than writing numerous reports each day, our bloggers will post only once a week. There will be a fresh blog each day, but the blogger will differ from day to day during the week. The bloggers will have a recognizable point of view, but it will emerge from their knowledge rather than their attitude. They will be engaging writers, but they will win your attention with insights, not insults.

The bloggers will start appearing at www.issues.org sometime in January, when we launch a new Web site design. In addition to the blogs, the redesigned site will include a search engine that will make it easy to research a topic by first finding all relevant Issues articles, then all relevant National Academies publications, and then the rest of the online world. If the blogs prove as addictive as we hope and the search engine fires on all cylinders, Issues will become a gateway to science, technology, and health policy: a place to catch a quick glance at the day’s policy news and debates or to settle in for a thorough exploration of a subject. Please click on over to sample these and other new features on the site.

Scientizing politics

The Republican War on Science offers a catalog of Republican-led confrontations with mainstream science, ranging from attacks on evolution and denial of climate change to the stacking of government advisory committees with industry scientists and the blocking of federal funds for stem cell research. As an unapologetic critic of the Bush administration, I was eager to read a penetrating political analysis of how the current regime has sought to wring partisan advantage from the complex and difficult relationship between politics and science. Alas, what I found was a tiresome polemic masquerading as a defense of scientific purity.

Author Chris Mooney asserts in the book’s earliest pages that he is out to defend science, not to advance a political agenda: “Except to take stances against inappropriate legislative interference with science and to advocate a strengthening of our government’s science policy apparatus, the text takes no position on questions of pure policy [emphasis added].”

Yet Mooney betrays this claim from the book’s opening salvo, which he aims at President Bush’s decision to restrict federally funded embryonic stem cell research to existing stem cell lines. “Bush’s nationally televised claim—that [there were] ‘more than sixty genetically diverse’ embryonic stem cell lines . . .—counts as one of the most flagrant purely scientific [emphasis added] deceptions ever perpetrated by a U.S. president on an unsuspecting public.” The actual number of available stem cell lines turned out to be considerably smaller, probably about 22. Consequently, opportunities for federally funded embryonic stem cell research are more limited than the president had indicated.

Mooney is claiming that the president’s sin against the commonweal lies in the exaggeration of the number of stem cell lines available for research, and has nothing to do with what those stem cell lines might represent. But why would the “unsuspecting public” care about the number of stem cell lines? Obviously, the real point of contention is the fact that the president acted to restrict a type of research that some people find desirable; the subtleties of cell line counting are secondary to this action. And one’s views on stem cell research reflect value judgments about the moral status of the embryo and the moral claim of people who might, in the future, be cured by stem cell therapy. Mooney’s insistence that he is simply protecting the purity of science thus collapses on the very first page.

Mooney tells a story of bad, duplicitous, politically motivated scientists and policymakers on the Republican side, and good, honest, disinterested scientists and policymakers on the Democratic side. He certainly offers up plenty of convincing detail on the pecuniary, ideological, and religious commitments of scientists who support the conservative agenda. Yet the commitments of those on the other side—on his side (indeed, for the most part, on my side)—are almost never discussed. For example, Mooney exposes the financial support that the hydrocarbon industry and conservative think tanks have provided for scientists who are skeptical about climate change, yet he identifies Michael Oppenheimer only as a “Princeton University climate expert” with no mention of Oppenheimer’s many years spent working for the advocacy group Environmental Defense.

And so, whereas the Republicans have their values and interests (and occasionally even some aspects of their personalities) aired, the Democrats are nothing but stick figures. But don’t we Democrats deserve to have our values and personalities explored? Shouldn’t we be proud to proclaim that we are motivated by a belief in, say, the positive and assertive role of government in protecting the environment and health, or our suspicion that corporations might put their desire for profitability above their concern for public well-being?

I guess the point is that this is a War on Science, and whereas war is prosecuted by humans (in this case, intellectually bankrupt enemies of science), science is, in Mooney’s portrayal, supposed to be an activity whose product—knowledge—is independent of the values of those who practice it. Yet Mooney never confronts the reality that scientists on his side of the fence must have values, interests, and personalities just as surely as those on the other side, whom he portrays as consistently corrupt. There can be only one of two reasons for this neglect. Either Mooney has chosen not to portray the values of scientists who line up on the Democratic side because he knows it would weaken his argument and undermine his claim that he is only defending the purity of science, or he actually believes that the scientists on his side are uninfluenced by their values and interests. The reader must therefore decide if the narrator is unreliable or just hopelessly naïve.

Mooney does not appear naïve. He takes pains to show that he understands the complexity of producing and applying scientific knowledge in politically contested arenas. For example, after several chapters devoted to Republican assaults on the science behind environmental and health regulations, Mooney offers this defense of regulatory science: “Many of the studies conducted to determine the appropriateness of government regulatory action cannot proceed under the same circumstances that govern [academic research]. Time and resource constraints—as well as the difficulties of conducting science at the edges of what’s known . . .—often mean that policy-oriented scientific research is of a different nature.” He amplifies this portrayal in a discussion of the controversy over water management in the Klamath River basin: “In the face of scientific uncertainty and insufficient evidence, the agencies exercised their professional judgment about how best to proceed to protect endangered species. …There wasn’t a lot of good evidence to go around, period, and the agencies did the best they could. They certainly didn’t abuse science in any way.” Of course not; the pure of heart love science only for itself.

Yet the very imperfections in science that Mooney must highlight to protect his notion of purity create a political hole big enough to drive a truck through, and the truck that the Republicans are driving is the ideals of pure science! The conservative “sound science” movement demands that regulatory science satisfy all the sacrosanct canons of the scientific enterprise—peer review, reproducibility, empiricism, transparency, openness. When real-world science falls short of this ideal, as it must, the antiregulatory zealots can dismiss it as “junk science” that is not good enough to justify regulation. Mooney paints this tactic as an abuse of science, but it’s not political conservatives who made up the ideals; they took them from the mainstream scientific community, which since the time of Bacon and Descartes has used them to claim a special place in society for both scientists and the knowledge that scientists produce.

Mooney thus pushes the reader’s nose into a dilemma that annihilates the book’s fundamental premises: If science in the policy arena is not to be measured against this ideal standard, then what alternative measure shall take its place? Whether one views imperfect knowledge as a reason to avoid environmental action (the “junk science” perspective) or to embrace action (the precautionary approach), the choice reflects one’s values, with purity nowhere to be found.

Indeed, throughout the book, Mooney’s polemical fervor blinds him to the political content inherent in all discourse that connects science to human affairs. Returning to embryonic stem cell research, Mooney excoriates the opposition for claiming that “so-called adult stem cells” are a scientifically suitable substitute for those derived from embryos. “Only a political movement that truly disdained science would embrace the stunning fictions of the ‘adult’-stem-cell-only crusade.” Apparently he is unaware that Germany, acutely conscious of its post-World War II responsibility to demonstrate an unconditional respect for human rights, has prohibited the destruction of embryos for research, while allowing and encouraging research on adult stem cells. Does this mean that Germans, like Republicans, must disdain science?

Finally, just as Mooney cannot seriously discuss the ways in which Democrats might use science to advance their own values, neither can he consider the possibility that values are part of the scientific enterprise itself. When he finally turns to what ought to be his easiest target, the fight against evolution, his one-dimensional view of the world can only reveal the obvious: that intelligent design is not really science and that its purveyors seek to insert a culturally conservative religious agenda into the teaching of science. He cannot acknowledge that science brings with it a world-transforming cultural agenda of its own—embodied in the modern notion of progress—for that would reveal that belief in evolution is associated with a value system far more powerful than that of religious fundamentalism.

In the end, Mooney’s desire to distance himself from such a value system simply replays the Democratic electoral defeat of 2002. In Mooney’s world, Republicans may be ruthless ideologues, but Democrats are soulless ciphers. From this perspective, it is easy to understand why Republicans won the election: at least they actually seem to believe in something. Were Karl Rove to read this book, I suspect he would be comforted.

Bad Fiction, Worse Science

Michael Crichton has achieved celebrity status as a novelist, film director, and television producer/series creator. Trained as a doctor, Crichton never pursued a medical career but instead successfully combined his interest in science with a talent for storytelling. His novels and other productions frequently begin with some scientific underpinning—dangerous organisms brought to earth by space capsules in The Andromeda Strain; dinosaurs restored to life from fossilized DNA in Jurassic Park. In most of his novels, he envelops this scientific content in the now-classic formula of a modern technothriller: starkly defined heroes and villains; Earth or some large part of it at risk of destruction; and beautiful, intelligent, available women saved from death by even more able and heroic men. Crichton’s novels attract many readers who take pleasure in reading understandable explanations of cutting-edge science and technology in the sugarcoating of a mass-market thriller.

In his new novel State of Fear, Crichton retains most of the formula while adding a heavy-handed political message. The scientific content is provided by a running debate on the seriousness of climate change. However, in this case the threat is not from nature or technology run amok, but from a gang of ecoterrorists who attempt to deploy sophisticated technology to simulate natural disasters in an effort to increase media coverage and public fear of the risks of climate change. The ecoterrorists turn out to be in the employ of a national environmental law organization whose leaders are knowingly making fallacious or exaggerated claims about the danger of climate change. Dependent on a “state of fear” to meet the growing financial needs of the organization, the group’s morally bankrupt leader resorts to high-tech terrorism. Fortunately, the evil plot is foiled in classic potboiler fashion at the last minute by a noble trio: a former academic turned secret-agent superhero, a wealthy contributor turned skeptic, and a beautiful female associate.

In the course of telling the story, Crichton paints a picture of climate science that is one-sided, error-ridden, and undeserving of notice from experts in the field. But Crichton sees his commentary on climate science as much more than a backdrop to an adventure story. He incorporates graphs and references to scientific articles into his narrative and ends the book with three annexes: an “Author’s Message” that lays out his views on environmental policy in general and climate change in particular; an essay titled “Why Politicized Science is Dangerous” that (I am not making this up) compares the history of the eugenics movement and its abuse by the Nazis with the alleged manipulation of climate science; and a 20-page annotated bibliography on environmental science that recommends books and articles that are skeptical about the seriousness of environmental dangers but says nothing about the reports of the Intergovernmental Panel on Climate Change, the distinguished international body formed to produce consensus reports on climate science.

As numerous reviews have noted, Crichton’s perspective can be seen as a counterpoint to last year’s movie The Day After Tomorrow, in which rapid climate change imperils much of the United States. Judged purely as entertainment, both the book and the movie achieved modest success. The film generated a few fundraising events for environmental groups, but there was little if any effort to present The Day After Tomorrow as a serious scientific statement. In contrast, Crichton has been treated as if he actually possessed a deep understanding of climate science.

He received respectful attention from the self-styled defender of “unconventional wisdom” John Stossel on national television; was featured as a speaker by the prominent Washington, DC, think tank the American Enterprise Institute; and was invited to testify on climate change science before the Senate Committee on Environment and Public Works. The committee chairman, Senator James Inhofe of Oklahoma, who has stated that global warming is the “greatest hoax ever perpetrated on the American people,” called State of Fear required reading for the committee. Although congressional testimony seldom makes national news, the New York Times carried a story on Crichton’s testimony.

The author stakes his claim to scientific legitimacy on the basis of extensive reading. In a tellingly pretentious opening to the “Author’s Message,” he notes that “I have been reading environmental texts for three years, in itself a hazardous undertaking.” From this reading he justifies several sweeping pronouncements, beginning with “We know astonishingly little about every aspect of the environment from its past history, to its present state, to how to conserve and protect it.” Although that is undoubtedly true about Crichton himself, it is a bit presumptuous for him to speak on behalf of the entire scientific community.

Although Crichton takes great pride in his own erudition, his dissertations on climate change are unimpressive to anyone with even an introductory familiarity with the literature. Two points highlighted with figures and footnotes are indicative. The first is an observation often cited by climate skeptics: average global temperatures fell between 1940 and 1970 despite rising CO2 levels. But climate scientists know that CO2 levels are only one of many factors that influence Earth’s climate and that factors such as sulfates and other aerosols in the atmosphere that induce cooling can for a time offset the heating caused by increased CO2. Another argument that Crichton must find particularly compelling (because he repeats it so often in the book) is to cite temperature and sea level measurements at specific places where the local climate is becoming cooler. This would be a relevant argument if global warming meant that the climate would become warmer everywhere in a consistent pattern (as one character states, “That’s why it’s called global warming.”). Of course, when scientists talk of global warming, they are referring to the mean surface temperature across all of Earth. Some places can become cooler even as global warming advances. (A thorough discussion of the science in State of Fear can be found at www.realclimate.org.)

Not satisfied with “debunking” climate science orthodoxy alone, Crichton inflates his argument to encompass a critique of the alleged biases, ignorance, and manipulation of the public through fear that he claims are endemic to organized environmentalism. He characterizes environmentalists as “ideologues and zealots” who indulge in “the intermixing of science and politics” under the direction of leaders “oddly fixed in the concepts and rhetoric of the 1970s.” In his telling, almost everything done in the name of the environment was based on questionable or erroneous science: banning DDT was “arguably the greatest tragedy of the twentieth century”; national park management is based on a “history of ignorant, incompetent, and disastrously intrusive intervention”; and banning chlorofluorocarbons harmed Third World people by eliminating cheap refrigerants, which resulted in their food spoiling more often and more of them dying of food poisoning.

In a revealing series of recommendations concluding his novel, Crichton proposes several measures to restore honesty and integrity to environmental science and advocacy: making scientists blind to their sources of funding, creating multiple teams to pursue competing approaches to major policy-oriented research, prominently labeling scientific results that are based on modeling rather than empirical evidence, and publishing peer reviews alongside scientific articles to “get the journals out of politics.” He also proposes a renewed focus on technology assessment and the creation of a new environmental organization, based on the concept “study the problem and fix it,” that would be unafraid to upset the status quo. Crichton’s discussion of these ideas as new and untried is one of many signs that three years of reading were insufficient to turn him into an expert on science policy. He might want to return to the library to study the history of the Office of Technology Assessment (OTA) as a model for the kind of neutral review and reporting function that he so strongly advocates. Perhaps his hubris will be shaken a bit when he learns that many of the members of Congress who are praising his book voted in 1995 to terminate this valuable experiment in attempting to separate science from politics.

Most likely, both State of Fear and The Day After Tomorrow will quickly disappear from public consciousness. However, the use of celebrities to address politically controversial scientific issues is a more lasting and troubling development. The role of celebrities as advocates for social causes is an established practice across the political spectrum. Hollywood stars and professional athletes were part of the entourage of both presidential candidates in 2004. Charlton Heston and Barbra Streisand are both as recognized for their association with political causes as for their movies. Entertainers such as Ronald Reagan and Arnold Schwarzenegger have used their celebrity to launch successful campaigns for public office. Rarely, however, have celebrities sought to use their fame as a platform to express themselves on the scientific aspects of controversial topics. And when they have, the public attention and impact have been negligible. With Michael Crichton, celebrity science has reached a new and disturbing level.

Novelists, filmmakers, and other entertainers are certainly free to incorporate scientific controversies into their work, and when done effectively, this can provide a valuable educational service. However, when celebrities are treated as scientific experts, the effect is to undermine public understanding. The interpretation and communication of complex scientific matters become simply another public relations game, in which celebrity substitutes for expertise.

Racing to the top

The United States is in a race to the top of a flat world. Will it win in this competition for the highest global standard of living? According to Thomas Friedman in The World is Flat, “It depends.”

The world is becoming flat through the convergence of three factors. First, new information technologies, networks, software, and standards are reducing costs that heretofore kept research, design, production, and jobs more or less rooted in a single place. Second, economic transformations in the face of these technological possibilities are uncoupling business processes and enabling remote collaboration along a spectrum of activities, from production to logistics to finance to research. Individually and together, these factors are radically changing what businesses do, how they do it, where they can do it, and whom they employ.

The third factor is more political. As technologies have created the possibility of separating tasks and engaging in remote collaborations, political change actually enables many additional people and firms in China, India, and the former Soviet Union to join in. The prospect of competing with many more people, particularly educated people who earn lower wages, is what is most frightening to business and labor in the United States.

Friedman provides numerous examples of this triple convergence from his extensive travels and contacts. For example, Rolls-Royce has wrapped the company’s core product—turbine blades from proprietary materials and processes—with R&D, products, and services from 50 countries to meet the demands of customers in 120 countries. Hewlett-Packard has linked its own 178 billing software systems (one for each country of operation) to create a new commercial product.

But Friedman’s view of the triple convergence understates some important features. If the world is flat for producers so that they can slice and dice production processes and outsource to the most advantageous location, then it also is flat for buyers. That is, the lower cost of producing customized products and services expands demand for them and thereby generates more buyers and leads to more jobs at all production locations.

This process of customization also quells Friedman’s concern that the flat world will lead to the homogenization of products and services. The “inefficiencies …of institutions, habits, cultures, and traditions that people cherish precisely because they reflect nonmarket values like social cohesion, religious faith, and national pride” will not have to disappear but will be perceived as market niches that now can be served by innovative companies in the flat world. Friedman sees a bright future for the “idea-based” workers who will create these products, but he might be overstating when he says that there “may be a limit to the number of good factory jobs in the world, but there is no limit to the number of idea-generated jobs in the world.”

Will the United States be able to maintain its rising standard of living by meeting the challenge and outpacing the newcomers in the race to the top of this newly flat world? Friedman’s prize is continued rising U.S. living standards, second to none. On the plus side, the United States enjoys the biggest and most daring marketplace, peerless and innovative university research complexes, an efficient and resilient financial system, and flexible labor markets. But Friedman soberly weaves anecdotes with data to illustrate three gaps—a numbers gap, an ambition gap, and an education gap—that hamper the United States in the living-standards race.

First, the numbers gap. The 21st-century production process for goods and services demands literacy, numeracy, and analytical skills throughout research, design, production, marketing, and sales. The U.S. educational system, however, is graduating fewer and fewer students with these abilities. Although U.S. universities continue to attract the world’s best students (albeit with some question as to whether this attractiveness will survive Homeland Security strictures), the triple convergence implies that foreign students can “work from home” in their native land when they graduate, and U.S. firms will find them there if they cannot find U.S. graduates with the right mix of skills.

Second, the ambition gap. Friedman musters fewer data here, but he uses a series of comments from interviews to paint a vivid picture of a generation of U.S. workers who feel entitled to good jobs and are thus very different from the scrambling, ambitious generation of workers in the countries opened up by the third convergence. Two quotes are especially telling. One is from an architect of information technology systems and teacher of computer science, who says, “Many Americans can’t believe they aren’t qualified for high-paying jobs; I call this the American Idol problem.” The second is from the chief executive officer (a U.S. citizen) of a London-headquartered multinational company, who says in discussing the (misguided) claim that lower costs are behind most outsourcing efforts, “When you think it’s only about wages, you can still hold your dignity, but the fact that they [in India] work better is awful.”

Friedman’s discussion of the third gap—education—amalgamates the lack of research funding in the United States for science, engineering, and technology; the poor performance of the nation’s K-12 students on international science and mathematics tests; and the deeper motivation among students and workers in countries opened up by the triple convergence. Although more data are brought to bear here, they reveal few new insights. A stricter focus on comparing national research funding and strategies for supporting science, engineering, and technology across countries would have made the description of this gap more distinct and compelling.

On balance, however, Friedman’s observations appear to be borne out by data on wages and incomes in the United States. Recently released data from the U.S. Census Bureau confirm that median household income has not increased in the past five years, the longest such stagnation ever. Whether this situation is because of numbers, ambition, or education, a greater share of the U.S. labor force appears to be experiencing the whirlwind of the triple convergence with stagnating living standards.

Although Friedman does not include it in his list, there is a fourth, and potentially more pernicious, gap: the political leadership gap. This gap is glaring because government has much to do with funding research in science and engineering, supporting graduate students, and demanding rigorous science and math standards for K-12 programs. More generally, political leaders need to explain to the public what is going on, craft a strategy to ensure U.S. success in the flat world, and then inspire the nation to action.

What is happening instead? Politicians actively deny that there is a problem, or worse, they blame others. According to Friedman, “Politicians in America today …seem to go out of their way actually to make their constituents stupid—encouraging them to believe that certain jobs are ‘American Jobs’ and can be protected from foreign competition, or that because America has always dominated economically in our lifetimes it always will. …It is hard to have an American national strategy for dealing with flatism if people won’t even acknowledge . . . the quiet crisis.” Moreover, he says, politicians fail to aid U.S. citizens as they seek to meet the triple convergence on their own.

The current political leadership receives particular opprobrium. “One of the most dangerous things that has happened to America since 9/11, under the Bush administration, is that we have gone from exporting hope to exporting fear. [He has driven] a wedge between America and its own history and identity [as the country that] looks forward not back [and that has] more dreams than memories.” Friedman also can be concrete: “Summoning all our energies and skills to produce a twenty-first-century fuel is George W. Bush’s opportunity to be both Nixon to China and JFK to the moon in one move. Unfortunately for America, it appears as though I will go to the moon before President Bush will go down this road.”

As to what is needed, Friedman offers this vision: “[T]o meet the challenge of flatism requires as comprehensive, energetic, and focused a response as did meeting the challenge of communism. It requires our own version of the New Frontier and Great Society adapted to the age of flatness. It requires a president who can summon the nation to get smarter and study harder in science, math, and engineering in order to reach the new frontiers of knowledge that the flat world is rapidly opening up and pushing out. And it requires a Great Society that commits our government to building the infrastructure, safety nets, and institutions that will help every American become more employable in an age when no one can be guaranteed lifetime employment.”

Friedman calls this “compassionate flatism.” Beyond requiring stronger political leadership, his vision involves muscle-building (increasing lifetime employability through such things as portable pensions and health care, and fostering lifelong learning through a funded and compulsory two years of tertiary or trade school); cushioning (employment-oriented programs, such as wage insurance); social prompting (consumer-led pressure on corporations to improve their global actions); and parenting (imparting values of education, hard work, and fair play).

Friedman is not a policy wonk and does not spend much time elaborating his policy program. One area, in particular, could use more attention: the fact that the triple convergence weighs on an increasingly large fraction of the white-collar workforce. Even workers who “have done everything right” (for example, by earning college degrees) will be laid off as their skills depreciate because of the triple convergence. Given the nation’s demographic situation, it is imperative that these workers remain gainfully employed through retirement. How might workers, corporations, and government collaborate to intervene? As one example, investment tax credits channeled through corporations to fund more training of the incumbent workforce could keep skills matched to corporate needs, with the training itself offered by accredited institutions.

Other topics also receive short shrift in the book. The chapter on developing countries points to poor governance and infrastructure as reasons why the triple convergence has yet to produce products and services customized to their domestic needs. Although the decision by local companies in some developing countries to focus on customized production for rich countries is mostly about profit, it is also likely that repatriated Indians know more about doing business with companies in Indiana than about doing business with companies in their native land. In addition, the chapter on companies accords them relatively little role in solving the “quiet crisis.” What about a more active pairing of companies with schools to tout the importance of science and math studies and the interesting jobs that these skills open up? Or retraining U.S. workers, perhaps with the money saved by paying more realistic chief executive officer salaries?

To pick other nits, the breezy, anecdote-packed style of The World is Flat will irritate some readers. Also annoying, Friedman has a habit of repeatedly interjecting pat phrases such as “That’s a flat world for you!” or “Wow, the triple convergence at work here.” It also bothered me that a small number of interviewees appeared numerous times throughout the book. The world might be flat, but it has plenty of people to interview.

Yet this book is important. It provides a convincing argument that for the nation to achieve the highest living standards in the flat world, “it depends” on businesses, workers, educators, parents, and politicians all taking decisive, concerted action—starting now.

Stranger in a Strange Land

An increasingly common aspect of globalization is the movement of plant and animal species to places that they did not previously inhabit. This movement includes plants sold for use in the horticultural and landscaping industries; birds, fish, and reptiles sold as pets; and stowaways on vehicles that carry goods internationally. A large number of these new populations will simply die out. Others will establish a viable population but will not raise an ecological or economic fuss. Yet another fraction will find much to like in their new home, and their numbers will swell and catch our attention. The species that will really stand out, however, are those that are numerous and also possess traits harmful to people, industry, or native biodiversity. It is these species that we call “invasive.”

Many articles and books for popular audiences have been written during the past decade or so about the growing number of instances in which nonnative species become invasive. The questions that confront ecologists and society in general concern the degree to which we should worry about this change in species distribution and, if we should worry, what we ought to do about it.

In Out of Eden, journalist Alan Burdick examines these questions in the process of studying several cases of biological invasion. He starts with a trip to Guam to investigate the impact of the brown tree snake on that island’s native species and people; then proceeds to Hawaii, with its cornucopia of non-natives; and ends with a series of mini-excursions into the study of marine invaders. Along the way, he encounters scientists and citizens with varying interests in understanding the invasion of nonnative species. He uses these encounters to explore several principles related to invasion ecology and to reveal the daily life of invasion ecologists. Burdick’s overall mission is to discover for himself what the changing distribution of species means for his understanding of nature. “Was this a new kind of Nature, or the old kind gone amok?” he writes. “What in this rapidly changing world is Nature?”

Before I pass judgment on whether Burdick achieves his goals, I should confess to my preference for reading about science, rather than reading science per se. I never pick up a book about science expecting to learn the complexities inherent in the scientific task at hand. I get enough of that at work. I like my popular science the way David Quammen delivers it, seasoned with laugh-out-loud humor. In fact, I think it takes a writer with a journalist’s sensibility to convey the wonder of nature that comes from the often backbreaking and tedious work of doing ecological research. Burdick’s book delivers the goods in this respect far better than I had expected.


One of the features I liked about Burdick’s approach is that he goes well beyond the single viewpoint that all non-natives have negative effects and instead examines the changing perception of impact as the invader’s population grows and society responds in positive and negative ways. For example, Burdick took the time to read old newspaper accounts of the brown tree snake in Guam and found how difficult it is for societies to recognize the threat of invasive species. The brown tree snake has been an unqualified disaster for the people and native species of Guam. This Australasian native was introduced to Guam sometime after World War II, likely as a stowaway on military cargo. It is omnivorous and has thus managed to eat its way through a variety of native populations, including endemic passerine birds, driving some of the rarest species to extinction. It also bites a fair number of people every year (it is mildly venomous), and it is responsible for frequent power outages. This has led to the development of elaborate and fascinating control and eradication efforts. Nevertheless, the early accounts of the snake by local journalists betray a lack of concern. Burdick quotes from a 1965 article in the Guam Daily News, written about 10 years after the snake’s initial arrival on the island: “Because they eat small pests and are not dangerous to man, they may be considered beneficial to the island.”

This view of invasive species is quite typical, even among invasion ecologists, because it is the nature of populations to increase slowly at first, even if the underlying dynamics herald an eventual explosion in numbers. Thus, even though some invasive species eventually cause serious ecological and economic harm, it is very difficult to decide whether the new species will turn into the evildoer you fear or the interesting new neighbor you can tolerate and perhaps enjoy.

Burdick also explores the perceptions of invasive species’ impact that are at the heart of how we relate to nature. Through interviews with citizens and biologists, Burdick documents an alarming disconnect between people and the natural world around them. Despite the average citizen’s interest in “wild” species, that curiosity is often quite satisfied through televised nature. This is the kind of nature in which lions interact peacefully with lambs and the wonder of the African Serengeti is only a channel-surf away. The end result is a kind of homogenized nature fantasy in which, Burdick writes, “every child dreams of crocodiles and lions at the expense of the bridled white-eyes and fruit bats in their backyard.”

Even if people do venture outside to interact with nature, they may find another barrier to perceiving the impact of non-native species. The more commerce serves to move species around, the more often the same set of species can be found everywhere. Burdick finds himself in exactly this situation when he flies to Hawaii “so eager to catch a glimpse of the unfamiliar Nature” and instead finds that he has succeeded “only in rediscovering my backyard.” Indeed, it is increasingly difficult to find a place far enough away from home, wherever that is, to see a big change in the set of species around you. This is one of the more subtle effects of the increasing prevalence of non-native species.

Good, bad, or both?

Burdick examines the conflicts that can arise between groups in society when an invasive species has both positive and negative aspects. He uses the example of a European pig introduced to Hawaii in the days of Captain James Cook. The pig was put on the island to provide later-arriving seamen with a source of meat. The pigs did quite well in Hawaii, and today can be found in forests on all the major islands. Those concerned with conserving Hawaii’s native plants and animals find the pigs problematic because they uproot native understory plants and create stagnant water bodies where invasive mosquitoes breed and then infest native birds with avian pox and malaria. Many native Hawaiians, however, rely on pigs as a source of dietary protein and consider hunting a part of their cultural heritage. The ensuing clashes between the two groups have sparked some interesting political fireworks. Burdick spends some time with Hawaiian hunters in order to understand their point of view and comes away with one of the more entertaining passages in the book: “Suffice to say, I had not been in Hawaii very long before I understood that the phrase alien species, used loosely and in the wrong company, might be hazardous to one’s health.”

Probably the most satisfying elements of Burdick’s book for me were his in-depth descriptions of ecological research and fascinating portraits of various invasion ecologists at work in the field. Burdick reveals the true nature of field ecology: dirty, tedious, and most certainly unsexy. I’ve encountered a lot of undergraduates whose knowledge of the ecologist’s life comes only from televised nature. I wish I could make them read this book and then come to me with ideas about what they would like to do for their senior thesis. We might get a little further a little faster.

I have two issues with Out of Eden that deserve some attention. First, I found a few factual mistakes that were understandable but distressing. For example, Burdick lists the Akohekohe, a Hawaiian honeycreeper, as extinct. Yet the bird is quite alive and kicking on Maui, although still threatened with extinction. Burdick likely used an older (mid-1980s) guide to the birds of Hawaii that erroneously considered the species extinct. This example and other such instances are not cause for great concern because none of the misstated facts play a key role in the arguments he is making. My second gripe is that Burdick occasionally overplays the metaphors he uses to describe complex ecological phenomena. Sometimes I found these metaphors charming and helpful. For example, Burdick compares the influence of non-native species on nutrient and carbon cycling through ecosystems with problems in finance and banking, with invasives “embezzling” and “laundering” ecological funds into foreign accounts that are held for use by non-native species only. On other occasions, I found myself grimacing a bit and quickly moving on.

Those faults aside, the achievement of Out of Eden lies in its multifaceted view of biological invasions. It covers a great deal of intellectual ground and does so in a compelling manner. I even laughed out loud a few times. Importantly, Burdick does not lose sight of his overall mission to search for what constitutes nature and how non-native species “fit” into this concept. This approach, I think, arms Burdick with a perspective that most recent writers on invasion ecology lack. He is not out to win you over to the side of “all nonnative species are bad” by reviewing how harmful some of them are for native species and societies. Instead, he seeks a more fundamental truth and has thus produced a fascinating review of how society and nature are entwined in today’s global community.

In keeping with this theme, Burdick does not end the book on what would probably be a falsely optimistic note. He instead takes us to the National Aeronautics and Space Administration’s (NASA’s) Jet Propulsion Laboratory in Pasadena, California, and introduces us to the prospect of exporting life to other planets. I had no idea that this was an issue, but Burdick provides another laugh for me in describing NASA’s past reliance on household cleaning methods in its efforts to keep microbes and other tiny life forms from shooting into space onboard one of its spacecraft: “The last thing Mars scientists want to discover is that Martians are the evolutionary descendants of Q-tips,” he writes. The realization of the Mars scientists’ nightmare may be nature’s last laugh as well.

Is the Next Economy Taking Shape?

Recent economic trends, including a massive trade deficit, declining median incomes, and relatively weak job growth, have been, to say the least, somewhat disheartening. But there is one bright spot: strong productivity growth. Since the mid-1990s, productivity has rebounded after 20 years of relatively poor performance. Why has productivity grown so much? Why did it fall so suddenly in the 1970s and 1980s? Is this latest surge likely to last? Understanding the answers to these questions goes to the heart of understanding the prospects for future U.S. prosperity.

Unfortunately, economists have provided few answers, largely because conventional neoclassical growth models ignore technological innovation. In contrast, a “neo-Schumpeterian” analysis suggests that the revival and stagnation of productivity are tied to the emergence and subsequent exhaustion of new techno-economic production systems. When an old economy reaches its limits in terms of innovation and the diffusion of the technology system, it becomes increasingly difficult to eke out productivity gains. Only when a new technology system becomes affordable enough and pervasive enough is it able to revitalize the engine of productivity. This analysis suggests that although the current information technology (IT)–based technology system is likely to continue to drive strong productivity growth for at least another decade, an innovation-exhaustion slowdown may be just over the horizon.

The old mass-production corporate economy emerged after World War II and prospered until the early 1970s. This was indeed a golden age, during which labor productivity grew on average 3% per year and real family incomes boomed (30% during the 1960s). Yet, starting in 1973, labor productivity growth fell precipitously to about 1.3% per year and income growth stagnated. Between 1973 and 1980, average family income did not grow at all, and it increased just 9% from 1981 to 1996.

Why, to borrow a phrase coined by economists Barry Bluestone and Bennett Harrison, did this “great U-turn” happen? Economists struggled to find answers, postulating that factors such as energy prices, interest rates, and taxes contributed to the decline. Economist Edward Denison conducted the most comprehensive analysis of the productivity slowdown and concluded that collectively these kinds of factors could explain at best only 40% of the productivity slowdown. The remaining 60% was a mystery.

To this day, economists are not quite sure what happened. Alan Blinder, former vice chairman of the board of governors of the Federal Reserve System, states, “No one quite knows why productivity growth slowed down so much, although many partial explanations—higher energy costs, lagging investment, and deterioration in the skills of the average worker—have been offered.” Economist and columnist Paul Krugman confesses, “We do not really know why productivity growth ground to a near halt. Unfortunately, that makes it hard to answer the other question. What can we do to speed it up?”

It’s the technology, stupid

Economists had difficulty determining the causes because they were not looking at changes in the underlying technological production system and how technologies changed. However, when viewed through the lens of economic cycles, the puzzle of falling productivity begins to make sense. In its heyday, the mass-production corporate economy was able to take advantage of a number of key innovations in technology, scale economies, and the organization of enterprises to create significant new efficiencies. Numerous production innovations, including automated assembly lines, numerically controlled machine tools, automated process control systems, and mechanical handling systems, drove down prices in U.S. manufacturing and led to the production of a cornucopia of inexpensive manufactured consumer goods. In fact, the rise of mechanical automation was the truly great development of that era’s economy. The term “automation” was not even coined until 1945, when the engineering division of Ford used it to describe the operations of its new transfer machines that mechanically unloaded stampings from the body presses and positioned them in front of machine tools. But automation was not confined to autos, discrete parts, and other durable goods industries; it became widespread in commodity processing. Continuous-flow innovations date back to 1939, when Standard Oil of New Jersey created the first of the industry’s great fluid crackers. In these plants, raw material flowed continuously in at one end and finished product emerged at the other end.

Even if all establishments adopted the core technologies, productivity could still keep growing if technologies continued to get better. Indeed, this is what happened for many years from the end of World War II until the early 1970s. But by the late 1970s, the dominant electro-mechanical technological path was exhausted, and further gains came with increasing difficulty. Engineers, managers, and others who organize production had wrung out nearly all the efficiencies available from greater scale economies and from fully exploiting the existing technological system. Over time, virtually all enterprises had adopted the new technologies and ways of organizing work and firms; most manufacturers used assembly lines, most chemical companies adopted continuous-flow processing, and most companies sold through department stores.

WHEN AN OLD ECONOMY REACHES ITS LIMITS IN TERMS OF INNOVATION AND THE DIFFUSION OF THE TECHNOLOGY SYSTEM, IT BECOMES INCREASINGLY DIFFICULT TO EKE OUT PRODUCTIVITY GAINS.

This trend can be seen in a number of industries. In banking, for example, the limits to mechanical check-reading became apparent. In the early 1950s, IBM invented an automatic check-reading and sorting machine for use by banks. Every few years, IBM and other producers would come out with a better and somewhat cheaper machine that would process checks just a little faster and with fewer errors. But by the early 1980s, the improvements slowed because it was physically possible to move paper only so fast. At that point, efficiency gains were more difficult to achieve. The same trend can be observed in the auto industry. Numerically controlled machine tools and other mechanically based metalworking tools could not be made much more efficient. As a result, auto sector productivity growth declined from 3.8% per year from 1960 to 1975, to 2.2% from 1976 to 1995.

By the end of the 1970s, the only way to regain robust productivity growth rates was for the production system to get on a new S-curve path based on a new set of core technologies. Even though industry leaders recognized at the time that IT would be the basis of the new technology system, the transition would not happen overnight. Even as late as the early 1990s, this emerging IT-based techno-economic system was not well enough developed, was too expensive, and was too limited to exert a noticeable economywide effect on productivity and economic growth.

This was why, by the early 1990s, many economists began to question whether the new IT system was in fact going to be the savior of productivity. Emblematic of their doubts, Nobel Prize–winning economist Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” This conundrum—rapid developments in IT but no rapid growth in productivity—was labeled the “productivity paradox.” Because productivity growth had lagged since the early 1970s while investments in IT grew, some concluded that IT did not affect productivity. For example, Bluestone and Harrison argued: “The first Intel chip, produced in late 1971, was capable of processing about 60,000 instructions per second. The latest, introduced in 1998, is capable of 300 million. Yet over that same period, productivity nose dived.”

In fact, IT was actually boosting productivity, but only in particular sectors. Since the 1970s, productivity grew 1.1% per year in sectors investing heavily in computers and approximately 0.35% in sectors investing less. Between 1989 and 2001, productivity growth in IT-intensive industries averaged 3.03% per year, compared to only 0.42% per year in less–IT-intensive industries.

Why were computers not showing up in the overall productivity statistics? Drawing an analogy to the adoption of electric motors, Stanford University economic historian Paul David advanced the most widely cited explanation, claiming that it simply takes a long time to learn how to use a new technology. He pointed out that it took over 30 years for electric motors to be fully utilized by factories after they were first introduced in the early 1900s, so we should not be surprised that it takes a long time for companies to figure out how best to use these technologies and reorganize their production systems. In contrast to the IT skeptics, David counseled patience.

Although David’s “learning” hypothesis seems reasonable, it suffers from two key problems. First, these technologies are actually not hard to learn. In fact, with “Windows” functionality, off-the-shelf software, and the easy-to-use Internet, information technologies are relatively easy for companies and people to adopt and use. Second, David’s story about learning suggests that technologies come on the scene fully formed and that it takes years for recalcitrant organizations to finally adopt them and figure out how to use them. Yet electric motor technology took more than 25 years to increase power output, functionality, versatility, and ease of use to get to the point where it was widely used and had a big impact. For example, in the 1920s, companies developed multivoltage motors, push-button induction motors, and smoother-running motors using ball bearings. In the 1930s, companies developed motors for low-speed high-torque applications and motors with variable-speed transmissions.

IT has followed a similar development trend. Compared to today, IT of even the early 1990s seems antiquated. The first popular Microsoft Windows platform (3.0) was not shipped until 1990, and even this was nowhere near as easy to use as Windows 95. Pentium computer chips were not introduced until 1993. The average disk drive storage was 2 gigabits. Few machines were networked, and before the mid-1990s, there was no functional World Wide Web.

One way of understanding how far IT has come is to realize that computer storage has become so cheap that companies give it away. For example, Google recently launched a free Web mail service called Gmail that gives users more than 2.6 gigabytes of free storage. If Google were to use 1975 storage technology, it would cost the company over $42 million in today’s dollars to provide me with that much capacity. In short, until the mid-1990s most Americans were working on Ford Model Ts, not Ford Explorers.
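To put that estimate in perspective, here is a quick back-of-the-envelope calculation using only the figures in the paragraph above (treating 2.6 gigabytes as roughly 2,660 megabytes); the implied per-megabyte price is my own arithmetic, not a figure from the original source:

$$
\frac{\$42{,}000{,}000}{2.6\ \text{GB}\times 1{,}024\ \tfrac{\text{MB}}{\text{GB}}}\;\approx\;\frac{\$42{,}000{,}000}{2{,}662\ \text{MB}}\;\approx\;\$16{,}000\ \text{per megabyte (in today's dollars)}
$$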

Yet compared to the original Apple II computer with no hard drive and 560 kilobytes of memory, the machines of the early 1990s looked pretty impressive. Most economists, who found desktop computers to be extremely useful in their own work, could not understand why this marvelous device was not leading to gains in productivity. They could not have anticipated that 10 years later these remarkable computers would not be good enough to donate to an elementary school.

In short, the skeptics were expecting too much too soon, and when the miracle did not happen, they questioned the entire IT enterprise. In arguing that policymakers should not look to IT to boost incomes, Bluestone and Harrison wrote in 2000 that the information age was in its fourth decade and had yet to show returns. The reality is that the IT revolution is only in its second decade, and all the prior activity was merely a warm-up.

The 1990s productivity puzzle

No sooner did the idea of the productivity paradox become widely accepted than events overtook it. Between the fourth quarter of 1996 and the fourth quarter of 2004, productivity growth averaged more than 3.3% per year, which was almost three times as fast as during the stagnant transition period.

What happened? The reason why productivity rebounded and continues its solid performance is that by the mid-1990s, the new IT system was affordable enough, powerful enough, and networked enough to open up a new set of productivity possibilities that could be tapped by a wide array of organizations. IT was particularly crucial in helping to boost productivity in the non-goods sector. Between 1973 and 1996, service sector productivity grew less than 0.4% per year. In an economy in which more than 80% of jobs are in the non-goods sector, even fast productivity growth in the goods sector is no longer enough to pull the overall economy along.

Yet until the mid-1990s, it was quite difficult for companies to automate processes such as phone calls, handling of paper forms, and personal face-to-face service. However, as it developed, IT provided the ability to vastly improve efficiency and productivity in services. Just as mechanization let companies automate manufacturing, digitization is enabling organizations to automate a whole host of processes, including paper, in-person, and telephone transactions. For example, check processing might have reached its productivity ceiling, but the new technology system of electronic bill payment does away with checks completely and lets banks once again ride the wave of significant productivity gains. Likewise in the auto industry, the new technology and organization systems allowed carmakers to control parts delivery in real-time systems, design cars on computers, machine parts with computerized numerically controlled machines, and do a host of other things to boost efficiency. The result was that auto productivity growth surged to 3.7% per year in the last half of the 1990s. This opening up of new technology has had a similar transformative effect on a host of industries.

Although many IT skeptics grudgingly acknowledge that IT helped boost productivity, it is fashionable now to argue that this trend is over. Business professors Charles Fine and Daniel Raff argue that, with regard to the auto industry, IT provides only “a one-shot improvement in forecasting, communication and coordination.” Morgan Stanley chief economist Stephen Roach agrees that IT yields a one-time productivity gain, after which stagnation will set in. Nicholas Carr, in a Harvard Business Review article entitled “IT Doesn’t Matter,” concludes that “As for IT-spurred industry transformations, most of the ones that are going to happen have likely already happened or are in the process of happening.”

I believe that just as the skeptics were wrong in the 1990s, they are wrong now. Although it is true that the adoption of a technology sometimes produces a one-time productivity gain for the adopter, as technologies diffuse to other adopters, productivity keeps going up. Moreover, it is not as if technologies do not improve. In these and myriad other ways, “one-time” gains become continuous gains, at least until the technology system is mature and fully diffused. This helps explain why the Institute for Supply Management recently found that 47% of manufacturing companies and 39% of nonmanufacturing companies believe they have achieved less than half the efficiency gains available from existing technology.

In order to achieve the full promise of the digital revolution, at least four things will have to happen to the technology system. First, the technology will need to be easier to use and more reliable. Americans do not think twice about plugging in an appliance because they know it will work. But in spite of considerable efforts to make them easier to use, most digital technologies remain complicated and less than fully reliable. Technologies will need to be so easy to use that they fade into the background. Luckily, the IT industry is working on this challenge, and each new generation of technology is getting closer to the ideal of “plug and play.”

Second, a variety of devices will need to be linked together. Although it is entertaining to watch a corporate road warrior in an airport security line juggling cell phone, laptop, Blackberry, and personal digital assistant, this is a diversion that needs to stop. At home, it is just as bad, with stereos, televisions, phones, laptops, desktops, printers, peripherals, and MP3 players existing in unconnected parallel digital universes. Moreover, an array of new devices such as smart cards, e-book readers, and ubiquitous sensors will need to be integrated into daily life and existing information systems. The IT and consumer electronics industries are well aware of the problem and are pushing toward convergence and integration.

Third, improved technologies are needed. A recent National Institute of Standards and Technology report articulated a number of cross-cutting generic technology needs in areas such as monitoring and control of large networks, distributed databases, data management, systems management, and systems integration. Other technologies, such as better voice, handwriting, and optical recognition features, would allow humans to interact more easily with computers. Better intelligent agents that routinely filter and retrieve information based on user preferences would make the Internet experience better. Expert system software would help in making decisions in medicine, engineering, finance, and other fields. Again, these improvements are being made. For example, Internet2, a consortium that includes more than 180 universities working with industry and government, is working to develop advanced network applications and technologies, accelerating the creation of tomorrow’s dramatically more powerful Internet.

Finally, we need more ubiquitous adoption. When roughly 75% of households are online, including 50% with true high-speed broadband connections, and 50% are using key applications such as electronic bill payment, a critical inflection point will occur. At that point the cyber world will begin to dominate, whereas now both activities—cyber business and traditional business—exist in parallel worlds. It is not just online ubiquity that we need; IT will need to be applied to all things we want to do, so that every industry and economic function that can employ digital technologies does. Government, health care, transportation, and many retail functions such as the purchase of homes and cars are some of the industries that lag behind.

In one sense, however, the skeptics are right. If past transformations provide a roadmap, although the productivity gains from today’s IT-driven economy should continue for at least another decade or so, they will not last forever. Most organizations will adopt the technology and the digital economy will simply be the economy. Moreover, the pace of innovation in the IT sector may eventually hit a wall. Indeed, many experts suggest that by 2015 the breakneck rate of progress in computer chip technology that has become known as Moore’s law will come to an end.

In Isaac Asimov’s Foundation series, the Foundations’ secret mission is to shorten a galactic dark age by accelerating the emergence of a new Empire, in that case one based on microminiature technologies. Although the United States will not face a 1,000-year galactic dark age, it might face a 10- to 20-year period of slow growth, precisely at the time when it will need that growth more than ever: when baby boomers go from being producers to consumers. This suggests that the nation needs to think about what kind of technology system will power growth 20 to 25 years from now and to consider what steps, if any, might accelerate its development. In the 1960s, no one predicted the slowdown that was to come just a decade later. If they had, perhaps they could have stepped up efforts to accelerate the IT revolution.

Which technologies will form the core of the next wave is not yet clear, but it seems likely that one will be based on nanoscale advances, whether in pharmaceuticals, materials, manufacturing, or energy. Another could relate to the key need to boost productivity in human-service functions. Boosting productivity in human-service occupations is difficult, but technology can play some role. For example, as Asimov has speculated, robots could play an important role in the next economy, perhaps by helping care for the elderly at home.

Although Congress and the administration have expanded research funding for biological sciences and established the National Nanotechnology Initiative, more needs to be done. One first step is to reverse the decline in research funding that the administration projects for the next three to four years. Another step would be to ask an organization such as the National Academy of Sciences to examine what the likely technology drivers will be by the year 2030 and what steps government and industry could take in the next decade to accelerate their development.

Harvard University economist F. M. Scherer has noted: “There is a centuries’ old tradition of gazing with wonder at recent technological achievements, surveying the difficulties that seem to thwart further improvements, and concluding that the most important inventions have been made and that it will be much more difficult to achieve comparable rates of advance. Such views have always been wrong in the past, and there is no reason to believe that they will be any more valid in the foreseeable future.” Such pessimism is especially misplaced now, given that we are in the middle of a technology-driven surge in productivity and can expect perhaps as many as two decades of robust growth until the current techno-economic system is fully utilized. Schumpeter got it right when he stated, “There is no reason to expect slackening of the rate of output through exhaustion of technological possibilities.” The challenge now is to make sure policymakers take the steps needed not only to advance the digital economy but also to put in place the conditions for the emergence of the next economy and its accompanying technology system.

A Forgotten Model for Purposeful Science

Toward the end of Richard Nixon’s first term as president, his Republican administration forced on a reluctant National Science Foundation (NSF) a major research program that looked like something out of a New Deal social laboratory. Research Applied to National Needs (RANN) was more ambitious than any program NSF had undertaken before or has undertaken since. Between 1971 and 1977, it spent almost a half billion dollars to fund research projects that were far-reaching, innovative, and targeted. Ironically, Jimmy Carter’s Democratic administration, with Frank Press as science adviser and Richard Atkinson as NSF director, killed it without regret.

RANN’s Republican origin was an obvious political paradox but so too was its ending. By killing the program, the U.S. science establishment spurned an opportunity to demonstrate the power of directed research to make life better for Americans and establish science as a “city upon a hill”: a merger of science with societal aspiration. In hindsight, the Democrats appear to have squandered an opportunity to raise science to higher esteem in the public eye.

At NSF, however, RANN’s demise was received with a sigh of relief and a touch of “good riddance.” Not only was the program eliminated, but except for mention in NSF histories, it was all but obliterated as an idea worth remembering. Still, the basic RANN concept never did die entirely, because from time to time since then calls have come forth—many in the pages of this journal—for a new contract between science and society. Whether described as “strategic research,” “Jeffersonian science,” or “Pasteur’s quadrant,” these calls have shared the goal of enhancing connective streams between basic research and the unsolved, overlooked, or lingering problems facing technology, industry, and society. Unfortunately, so officially discredited was RANN that none of these appeals referred back to the program as an approach that was once tried and might still hold lessons.

We believe that a modernized version of RANN could be what the country needs and might enable science to contribute more to the common good. Cries for a revived federal science and technology policy are hardly lacking today, especially in view of the Sturm und Drang in specific areas such as stem cell research, the teaching of evolution in schools, global warming, energy shortages, health care inefficiencies and inequities, weakness in the country’s critical infrastructure, various social programs said to be suffering from budget cuts, the collapse of the industrial pension system, the marginalizing of the White House science advisory role, and the stacking of science panels with political ideologues. What has been lost, according to critics, is foresight and objectivity in establishing priorities, giving rise to fears that when science advice is unsought, devalued, or restricted to interests dominated by the marketplace, only a degraded and forfeited future can be in prospect.

RANN’s purpose was to manage a set of research projects specifically targeted at improving various social, economic, industrial, and intergovernmental sectors of the country: problems in energy, the environment, industrial innovation, urban and rural quality of life, the criminal justice system, medical delivery systems, the management of cities, communication needs across the board, transportation, the country’s infrastructure, analysis of complex policy issues, and quite a bit more.

RANN scanned the terrain in search of existing and emerging problems, assembled them into categories, and asked the research community—academic and industrial—to submit proposals for research on meeting goals that fell under each grouping; goals that the federal mission agencies either missed or lacked the resources to tackle. The program was essentially an idea factory that depended on researchers to embroider those ideas, reshape them, take resulting projects to the proof-of-concept stage, and once they achieved promise, transfer them to industry, a mission agency, or to state or local governments so that they could be put to practical use.

The main problem, however, was that RANN was never able to embed itself in the value system of the basic research establishment itself, much less in the inner structure and mentality of its agency. NSF was founded on the assumption that its sole mission was to support basic research. Accordingly, most of its senior staff resented any encroachment on that sacred trust by anything that reeked of applications. Too much applied research, it was believed, would only crowd out university funding for basic research, felt to be eternally in short supply, and do little more than water down academic excellence.

Still, the program did run long enough to leave a record, even a legacy in the form of programs funded by RANN that exist today. Using science and technology as a resource for state and local governments in their policy planning and in program development and execution was a RANN initiative. Interdisciplinary research, now so common in NSF programs, got its start at RANN. Cooperative ventures among government, universities, and industry, also commonplace today, began under RANN. The Small Business Innovation Research (SBIR) program was devised by RANN. Much of today’s fire research program at the National Institute of Standards and Technology was initially inspired by RANN fire projects. RANN’s solar energy program formed the basis for renewable energy research at today’s Department of Energy. Its environmental projects on trace metal contaminants and estuaries at risk preceded work that was moved to the Environmental Protection Agency. It did early work on the technique of technology assessment and was thus a precursor to Congress’s now-defunct Office of Technology Assessment.

An unlikely lineage

Why RANN was a Republican invention is a quirk of political history. But it did make sense at the time. The administration was eager to soothe an electorate split over the war in Vietnam, worn down by the urban and racial violence of the 1960s, and staggered by the wrenching assassinations of Martin Luther King Jr. and Robert F. Kennedy. Unemployment was a worry; the space program was sputtering because the end of the manned expeditions to the Moon had put thousands of engineers out of work. Most of the country hungered for stability against groups of activists clamoring for radical change. New ideas were being sought to improve a stagnating economy and overhaul outmoded institutions. (“If we can send a man to the Moon, why can’t we…?”) And the Middle East was nearing the brink of another Arab-Israeli war, which would ultimately lead to the crippling 1973 oil embargo. Moreover, the 1972 election was looming, and Nixon’s staff was eager to snatch any ideas they could to animate that “lift of a driving dream” slogan that speechwriters inflicted on the president for a brief time.

NSF had already had a launch platform, as it were, for RANN in the form of a small program called Interdisciplinary Research Relevant to Problems of Society, which was established in 1968, after Congress passed an act that amended NSF’s charter to include research in social science, applied science, and engineering. The program was NSF’s first attempt at combining engineering and the physical and social sciences in single research projects.

Sensing that the National Science Board (NSF’s board of directors) might be reluctant to launch something so large and disruptive, George Shultz, head of the Office of Management and Budget and no stranger to academic science, went before the board to tell its members how much the administration desired RANN. He told the board that unless it supported RANN, any increase in the NSF budget for the coming fiscal year would be denied by the administration. RANN soon became a reality.

RANN’s first director was Alfred J. Eggers, a National Aeronautics and Space Administration (NASA) engineer and administrator who was known for his design work on the lifting-body concept so central to the landing of wingless vehicles from space, and for a certain zeal for applying technological developments from the U.S. space program to civilian use. NSF director William D. McElroy threw his own energies into the RANN cause against a skeptical Congress and continuing resistance from NSF’s senior staff and some of the top members of the country’s science establishment. McElroy left NSF about a year after RANN’s founding and was succeeded by H. Guyford Stever, who also took a liking to RANN. At the White House, Nixon’s science adviser Edward E. David established an interagency committee to ensure that RANN efforts were being coordinated with the interests and wishes of wary mission agencies, always protective of their own turf.

RANN’s major program areas:

  • Advanced Technology Applications: earthquake engineering, fire research, socioeconomic response to natural hazards, and technological opportunities
  • Energy Research and Technology: solar energy, geothermal energy, energy conversion and storage, energy systems, energy resources, advanced automotive propulsion, and energy and fuel transportation
  • Environmental Systems and Resources: environmental effects of energy, regional environmental systems, environmental aspects of trace contaminants, and weather modification
  • Social Systems and Human Resources: municipal systems and services, human resources and services, social data and evaluation, and public regulation and economic productivity
  • Exploratory Research and Problems Assessment: technology assessment, selected research topics, and new problems and projects
  • Intergovernmental Science: strengthening the capability of state and local governments to integrate science and technology into their policy formulation and daily operations

RANN’s operations had to be kept flexible and imaginative. Change was part of its culture. RANN constantly fine-tuned its structure as projects ended and others began. It was essentially an agency within an agency, and its structure, spirit, and NASA-like go-go way of operating unnerved many of NSF’s core staffers, who bridled at the discipline of schedules and milestones.

Richard Atkinson replaced Stever as NSF director in May 1977, soon after Carter took office as president, and from the start he showed little friendship toward RANN. He created an agency task force (which RANN supporters believe was rigged against the program) to conduct a review and evaluation. The task force said RANN was a distraction and an impediment to NSF’s primary role of supporting undirected research in basic and applied science and engineering in universities. The RANN program, though less than 10% of the NSF budget, was viewed as a threat to the status quo. The task force recommended ending the program, and by September RANN was shut down.

The Democratic Congress did nothing to save what was seen basically as a Republican initiative. Few of the mainstream staff at NSF regretted the loss and for years sustained the fantasy that RANN was a badly run bad idea managed by people who fell outside the NSF culture. Eventually, RANN was either forgotten or dismissed with specious derision.

Atkinson may have succeeded in killing RANN, but NSF could never escape the pressure from Congress to demonstrate how the basic research it funded was benefiting the country. The agency’s engineering programs were grounded in practicality, but gone was the conscious, directed attempt by NSF to seek out specific societal problems and train the resources of portions of the research world on their solution. Instead, application at NSF became indirect, in the form of increasingly close relations between the agency and industry. Engineering programs and new university innovation centers grew in fields such as biotechnology, information technology, and more recently nanotechnology. These efforts have grown to the point where they now essentially determine a significant portion of NSF’s basic research agenda. In their funding approach, they reflect not so much the RANN approach as the “Jeffersonian science” technique of simply pouring research money into a technology that is assumed, in an unspecified way, to be substantially useful.

NSF’s three technology initiatives—bio, info, and nano—could become more RANN-like if the agency were to adopt RANN’s “hot pursuit” concept. Under that idea, RANN analysts would examine various research outcomes and determine those with high application potential. Those with the most promising economic or social potential would be chosen for hot pursuit of some application, and when preliminary investigation proved successful, they would be transferred to an appropriate user organization. Unsuccessful initiatives would simply be dropped.

The point is that the need for new programmatic ideas is never exhausted. Consider how, in the wake of Hurricane Katrina in 2005, the plight of the poor in New Orleans and the extent of their problems suddenly became visible to the rest of the country. The poor have seldom been targets of serious assessment by the scientific community. A RANN program would assess their needs and look for research that might be helpful—ranging from assessments of information services such as the Internet, to new approaches to housing and other infrastructure needs, to educational innovations, to family assistance requirements. RANN would identify problems; take a first cut at identifying relevant research; and call on researchers, organizations, and interest groups to help in the planning of applied research initiatives.

A redesigned RANN would heavily involve the social sciences to define issues of individual and community concern, from the needs of children and families to the optimization of the lives of the elderly. It would expand its interests in technological innovation and perhaps model its own approaches partly on existing programs at the Defense Advanced Research Projects Agency (DARPA) and the Advanced Technology Program (ATP) at the National Institute of Standards and Technology.

Both programs are reminiscent of RANN in the targeted approach they take toward innovation. DARPA’s work, which preceded RANN’s establishment by 13 years, is to sponsor high-payoff research that bridges the gap between fundamental discoveries and their military use. It is the military’s technological change agent, with a program marked by opportunism and flexibility. The concept of a “civilian DARPA” has been discussed in science policy circles since the 1970s, when the government, alarmed by Japan’s rise as a technological superpower, began seeking ways of spurring innovation in U.S. industry. A civilian DARPA, it was thought, would upgrade the government’s expertise by adopting an active, strategic approach to meeting present and future national needs.

A DARPA approach toward innovation in energy was in fact proposed in the report Rising Above the Gathering Storm issued by the National Academies in October 2005. “The new agency,” the report declared in what is in fact a functional definition of a new RANN, “would sponsor creative, out-of-the-box, transformational, generic energy research in those areas where industry by itself cannot or will not undertake such sponsorship, where risk and potential payoff are high, but where success could provide dramatic benefits for the nation. ARPA-E [Advanced Research Projects Agency-Energy] would accelerate the process by which research is transformed to address economic, environmental, and security issues. It would be designed as a lean, effective, agile—but independent—organization that can start and stop targeted programs based on performance and ultimate relevance.” The applicability of that approach to other national needs is obvious.

These thoughts are merely suggestive of what a new RANN might assemble. The shape would be contingent on the chosen needs. Suffice it to say that a new RANN would have the charter to range far and wide. A primary responsibility would be problem selection and definition. Once the program plans were in place, RANN would make awards to carry out the required problem-focused research. Awardees would run the gamut of organizations, including academe, government, nonprofits, industry, and other qualified performers. Under this scheme, RANN would be a unitary, self-contained organization to develop solutions to national problems and hand them off to the appropriate users.

The ATP model would be equally relevant to a redesigned RANN. ATP was established as a direct outgrowth of concerns over Japan two decades ago. Its purpose is to serve as a venture capital source of sorts to companies that want to explore risky pre-market technologies but need some financial help to do it. In fact, the seeds of ATP were sown in the SBIR program that was established as a RANN venture. The grants that ATP gives to corporations must be matched by them dollar for dollar. ATP support for a company usually ends within three years (five years for consortia) on the assumption that by then any idea should be either discarded or taken fully over by the company receiving the support. The same procedure would hold true for the new RANN, including the practice of cost sharing.

Obviously, any new RANN proposal would run up against today’s Republican market-oriented politics and distrust of government. A new RANN along the lines drawn here would without question require a transformation in attitudes about government’s role in economic and social contexts. Thus, any revival might well have to wait for changes in public opinion and a transfer of political power. We believe, however, that the public is ready to hear the message of RANN and that the research community would serve itself well by reassessing its own public responsibilities.

Science policy today is a passive activity, especially because the market is seen as the center of national governance. The market, although it works well in meeting and stimulating public wants, nevertheless does a poor job of reflecting public needs. The unanswered question is whether any administration would be willing to give a new agency the resources, independence, and flexibility necessary to do the job right. The will to act will come when voters focus on the enormous potential latent in government-funded science and begin to ask those ever-fresh questions: Why not, and what if?

From the Hill – Winter 2006

White House unveils pandemic flu plan

In a November 1 speech at the National Institutes of Health (NIH), President Bush proposed a multiyear plan to address the growing global threat of an avian flu pandemic. The plan includes an initial investment of $7.1 billion in emergency spending in fiscal year 2006, which is provoking some resistance in Congress.

In the speech, the president said his plan is designed to meet three critical objectives. “First,” he said, “we must detect outbreaks that occur anywhere in the world; second, we must protect the American people by stockpiling vaccines and antiviral drugs, and improve our ability to rapidly produce new vaccines against a pandemic strain; and, third, we must be ready to respond at the federal, state and local levels in the event that a pandemic reaches our shores.”

The $7.1 billion emergency spending would include:

  • $251 million to assist other countries in training personnel and to develop surveillance and testing systems that will allow for early detection and containment of avian flu outbreaks
  • $1.5 billion for the Departments of Health and Human Services (HHS) and Defense to purchase an experimental influenza vaccine based on the current H5N1 strain that is spreading throughout Asia
  • $1 billion to stockpile antiviral medications
  • $644 million for local government response
  • $2.8 billion to accelerate development of cell-culture technology to allow manufacturers to move away from the current egg-based technique for creating flu vaccines
  • $800 million for R&D of other novel treatments

The president said he would like to eventually achieve a goal of producing 300 million doses of vaccine within 6 months of a pandemic outbreak.

In testimony before the House Labor-HHS Appropriations Subcommittee and the Energy and Commerce Committee, HHS Secretary Michael Leavitt said that the experimental vaccine for H5N1 must be given in two doses to be effective. He estimated that by 2009, the United States would be able to produce about 40 million doses using existing production techniques, enough for 20 million individuals.

This pre-pandemic vaccine would be used to immunize health care workers and other first responders. However, Leavitt emphasized that H5N1 would have mutated by the time it could be easily transmitted between humans, thus making today’s vaccine less effective. With the current egg-based production techniques, it would take six months from a pandemic outbreak to research, test, and produce 60 million courses of a new vaccine based on the current mutated strain.

Anthony Fauci, director of NIH’s National Institute of Allergy and Infectious Diseases, said at the hearings that investment in cell-based technologies would allow the United States to develop the “surge capacity” needed to produce 80% of the targeted goal of 300 million vaccine doses within 6 months of a pandemic outbreak.

The high price tag of the new plan and uncertainty about how imminent an influenza pandemic is have led to some congressional skepticism. Although acknowledging the importance of addressing an avian flu pandemic, House Energy and Commerce Committee Chairman Joe Barton (R-TX) said that, “Wasting taxpayers’ money will not keep people from catching the flu. . . . We need to sort out our real weaknesses from our imagined ones, and determine where the application of money and good sense will actually improve our preparedness and stop the flu.”

Meanwhile, other members of the committees questioned the administration’s plan to give industry liability protection without considering recourse options for patients in the event of adverse reactions to a vaccine. Rep. Dave Weldon (R-FL) said that although ensuring industry participation is important, government policy must also seek to encourage patients to submit to a vaccine that may carry unknown risk.

Senate delays action on bill to ease stem cell research restrictions

A Senate bill that would ease current restrictions on federal funding of embryonic stem cell research has been postponed until 2006. However, consideration of the bill will be given a high priority early in the next session, according to an agreement worked out by Majority Leader Bill Frist (R-TN) and Sen. Arlen Specter (R-PA). Specter had threatened to attach the Stem Cell Research Enhancement Act (S. 471) to the Labor-HHS appropriations bill to force consideration of it in 2005.

To keep the issue of stem cell research funding in the public eye and to shore up support for the legislation, Specter held a hearing on the state of the science on October 19. At the hearing, four scientists and a cancer survivor testified about the importance of exploring the potential of every kind of stem cell: adult, embryonic, and umbilical cord.

Rudolf Jaenisch of the Whitehead Institute at MIT described his recently developed method for producing stem cells without destroying embryos. The technique involves using a form of somatic cell nuclear transfer (a cloning method) that switches off a gene in the donor nucleus, essentially rendering the resulting embryo incapable of implanting in a uterus. Groups that oppose current research because embryos must be destroyed have embraced this new development.

Jaenisch, however, made it clear that his technique has not been scientifically proven, and questions remain about whether the stem cells derived from the method are as “flexible” as embryonic stem cells obtained by other methods. Because of these limitations, Jaenisch believes that research using different types of stem cells must still be pursued. He also emphasized that his research is currently funded by NIH only because he uses mouse tissue and not human cells.

Judith Gasson, professor of medicine and biology at the University of California–Los Angeles, said she believes that current policy is slowing the progress of related research breakthroughs by limiting access to healthy human stem cells. By studying the ability of stem cells to self-renew and differentiate, Gasson hopes to find insights into the pathology of cancer cells and mechanisms that inhibit the renewing process.

The frustration engendered among researchers because of the current limited access to stem cell lines was echoed by John Wagner, a clinical researcher at the University of Minnesota who specializes in blood and bone marrow transplants. Wagner said that although “we are excited about the future potential of these stem cells … never have we suggested that they obviate the need for [embryonic stem] cell research. For example, never have the stem cells from cord blood or adult tissues ever produced heart muscle cells that spontaneously beat or formed islets that secrete insulin, as has been shown repeatedly with [embryonic stem] cells.” In addition, he emphasized that “every discovery with [embryonic stem] cells has furthered our work with stem cells from umbilical cord blood or adult tissues.”

Steven Teitelbaum, a professor of pathology and immunology at Washington University in St. Louis, argued on behalf of all areas of research. “Opponents of human embryonic stem cell research often articulate their position as a contest between adult and embryonic stem cells,” he said. “This is not a contest between various types of stem cells. It is a contest between us as a society and disease. We should be moving forward on all fronts, adult, embryonic, and umbilical cord stem cells, to win the battle. The tool is not important. What counts is curing our neighbors.”

Asked by Appropriations Chairman Thad Cochran whether there was disagreement and controversy about stem cell research in the scientific community, Teitelbaum said he believed that the “overwhelming opinion of scientists is to go forward with stem cell research” and there was “no major disagreement among the scientific community.”

Senators urge U.S. to return to climate change talks

Just before a major international meeting on climate change in Montreal, Senate Foreign Relations Committee Chair Richard Lugar (R-IN) and Ranking Member Joe Biden (D-DE) on November 15 introduced a Sense of the Senate resolution that calls for the United States to return to international negotiations on reducing greenhouse gases that lead to climate change.

The resolution stipulates that any future accord “establish mitigation commitments by all countries that are major emitters of greenhouse gases, consistent with the principle of common but differentiated responsibilities.” The stipulation responds to concerns that the Kyoto Protocol does not include emission reduction targets for developing countries, particularly China and India, whose emissions may soon exceed those of most developed countries.

The resolution cites scientific consensus that anthropogenic greenhouse gases threaten the stability of the global climate and notes that climate change presents long-term risks to the U.S. economy and has implications for national security. It would establish a Senate Observer Group to “ensure bipartisan Senate support for any new agreement.”

The resolution was proposed just before the 11th Annual United Nations (UN) Climate Change Conference, which was to be held from November 26 to December 9. The meeting was to feature both the 11th session of the Conference of the Parties to the UN Framework Convention on Climate Change and the first meeting of the Parties to the Kyoto Protocol since the Protocol’s entry into force.

Delegates at the meeting were expected to work on mechanisms to strengthen and implement the Kyoto Protocol. These include implementing the Clean Development Mechanism, which allows industrialized countries to receive credits for their investments in emission reduction projects in developing countries, and a Joint Implementation program, which allows developed countries credit for reductions they make in other participating developed countries. Parties also hope to establish an international emissions trading scheme.

Delegates were also planning to begin negotiations on a climate policy to take effect after the Kyoto Protocol expires in 2012.

House votes to revamp Endangered Species Act

The House on September 29 passed a bill that would reduce certain protections for endangered species. The Threatened and Endangered Species Recovery Act was sponsored by Rep. Richard Pombo (R-CA), who, citing statistics that fewer than 1% of species have been recovered under the Endangered Species Act (ESA), declared the act a failure.

Pombo’s bill would repeal the “critical habitat” program that protects lands necessary for species recovery, replacing it with a nonbinding recovery plan. The bill focuses on private property rights, providing funds to compensate landowners for the loss of use of their land due to the presence of endangered species. Provisions stipulate that if the federal government does not determine whether private land development would adversely affect species within 180 days, then development would be automatically approved.

Early versions of the bill eliminated protection for species that fall under the “threatened” status that often precedes an endangered listing, but an amendment by Rep. Mark Udall (D-CO) restored those protections.

The bill requires the secretary of the interior to define “what constitutes the best available scientific data.” The bill also states: “To the extent that data compiled for a decision or action do not: (1) meet the criteria for the best available scientific data; (2) are not in compliance with OMB (Office of Management and Budget) Guidance for compliance with the Data Quality Act; (3) do not include any empirical data; or (4) are found in sources that have not been subject to peer review in a generally acceptable manner, then the Secretary must undertake measures to ensure compliance with the criteria and/or guidance and may secure empirical data, seek appropriate peer review and reconsider the decision or action based on such compliance actions.”

Many in the environmental and scientific community have argued that these provisions weaken the role of science and favor empirical data over statistical models that are often used in wildlife management.

During the floor debate, a coalition led by Reps. George Miller (D-CA) and Sherwood Boehlert (R-NY) offered a substitute amendment to the bill that would keep some of the protections for species that the Pombo bill eliminated. The Miller-Boehlert amendment was defeated by a vote of 206 to 216.

Legislation to revamp the ESA has not been introduced in the Senate, which is waiting for recommendations to be made by a bipartisan group known as the Keystone Commission.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, DC, and is based on articles from the center’s bulletin Science & Technology in Congress.

Collaborative Advantage

Almost daily, news reports feature multinational companies—many based in the United States—that are establishing technology development facilities in China, India, and other emerging economies. General Electric, General Motors, IBM, Intel, Microsoft, Motorola—the list grows steadily longer. And these new facilities no longer focus on low-level technologies to meet Third World conditions. They are doing the cutting-edge research once done only in the United States, Japan, and Europe. Moreover, the multinationals are being joined by new firms, such as Huawei, Lenovo, and Wipro, from the emerging economies. This current globalization of technology development is, we believe, qualitatively different from globalization of the past. But the implications of the differences have not sunk in with key U.S. decisionmakers in government and industry.

It is not that the new globalization has gone unnoticed. Many observers are concerned that the United States is beginning to fall into a vicious cycle of disinvestment in and weakening of its innovation systems. As U.S. firms move their engineering and R&D activities offshore, they may be disinvesting not just in their own facilities but also in colleges and regions of the country that now form critical innovation clusters. These forces may combine to dissolve the bonds that form the basis of U.S. innovation leadership.

THE UNITED STATES NEEDS TO AGGRESSIVELY LOOK FOR PARTNERSHIP OPPORTUNITIES— MUTUAL-GAIN SITUATIONS—AROUND THE GLOBE.

A variety of policies have been proposed to protect and restore the preeminent position of U.S. technology. Some of these proposals are most concerned with building up U.S. science and technology (S&T) human resources by strengthening the nation’s education system from kindergarten through high school; encouraging more U.S. students to study engineering and science, specifically inducing more women and minorities to pursue science and technology careers; and easing visa restrictions that form barriers to talented foreigners who want to enter U.S. universities and industries. Other proposals include measures to outbid other countries as they offer benefits to attract R&D activities. Still others call for funneling public funds into the development of technology. Some observers, for example, believe that the technological strength of U.S. firms would be improved by the government’s greatly increasing its support of basic research.

Our studies of engineering development centers in multinational home countries and in emerging economies lead us to a concern that many U.S. policymakers and corporate strategists, like the proverbial generals preparing to fight the previous war, are failing to recognize what is distinctive about today’s emerging global economy. Indeed, in some cases they are pinning their hopes on strategies that were not notably successful in past battles. Although our research suggests several trends that may be problematic for the United States, we also see strong possibilities that the nation can benefit by developing “mutual gain” policies for technology development. Doing so requires a fundamental change in global strategy. The United States should move away from an almost certainly futile attempt to maintain dominance and toward an approach in which leadership comes from developing and brokering mutual gains among equal partners. Such “collaborative advantage,” as we call it, comes not from self-sufficiency or maintaining a monopoly on advanced technology, but from being a valued collaborator at various levels in the international system of technology development.

First, however, it is necessary to understand the trends that could lead to a vicious cycle of disinvestment in U.S. S&T capabilities and, most important, how these trends differ from previous challenges to the U.S. system.

Fighting the last war

Half a century ago, the United States was shocked by the ability of the Soviet Union to break the U.S. nuclear monopoly and then to beat the United States in the race to launch a space satellite. Americans were deluged with reports that Soviet children were receiving a far better education in S&T than were U.S. children and that the USSR graduated several times as many engineers each year as did the United States. Worse, the USSR appeared to be targeting its technological resources toward global domination. Twenty years later, Americans were further shaken by the rapid advance of Japanese (and then Korean) firms in industries ranging from steelmaking and auto production to semiconductors. It was widely pointed out that Japan graduated far more engineers per capita than did the United States. As the Japanese seemed on a relentless march to dominance in industry after industry, pundits in the United States commented that whereas the brightest young U.S. students studied law or finance, the brightest Japanese studied engineering. Books were written about Japanese government policies that targeted certain industries, enabling them to gain comparative advantage in key technologies. Some observers advocated the establishment of a U.S. Ministry of International Trade and Industry on the model of Japan’s. As the United States lost its technological edge, many feared that it would also lose its ability to maintain its global power and high standard of living.

The military threat from the Soviet Union was real, but it diminished as a result of weaknesses in the Communist economic and technological systems. The economic threat from East Asia also quickly diminished. To be sure, the United States lost hundreds of thousands of jobs beginning in the 1980s as multinationals moved production to low-cost sites offshore and as new multinationals from Japan and Korea took growing shares of global markets. But even though that shift was painful for certain U.S. companies and for workers who lost their jobs, the U.S. economy as a whole grew along with the growth in world trade, and much of the new U.S. workforce moved into higher value–added activities.

The United States was not saved from either of these threats because it improved its educational system to surpass those of other countries or because it managed to produce more engineers than other countries. The United States had other strengths. It attracted large numbers of talented foreigners to its universities and businesses. It provided the world’s most fertile environment for fostering new business ventures. Its institutions were flexible, enabling human and other resources to be constantly redeployed to more efficient uses. At the end of the past century, the United States was spending far more on R&D than Japan and nearly twice as much as Germany, France, and the United Kingdom combined.

The globalization challenging U.S. firms in the 1970s and 1980s was different from the globalization in the more immediate postwar era. In the 1950s and 1960s, U.S. firms had taken simple, often obsolete technology offshore to make further profits in markets that were less demanding than those at home. That era of globalization was dominated by U.S. (and some European) firms. Wages could be far higher in the United States than elsewhere because the U.S. workforce, backed by more capital and superior technology, was far more productive. Firms did not need to worry much about foreign competition. Moreover, trade restrictions protected the privileged situation enjoyed by U.S. companies and workers.

Beginning in the late 1960s, however, it was becoming clear that the world was moving to a second generation of postwar globalization. One of the most notable facets of this new wave was the emergence of large numbers of non-Western firms to positions of global strength in automobiles, consumer electronics, machine tools, steelmaking, and other industries. U.S. firms often were blindsided by the emergence of these new competitors, and many domestic firms at first refused to take them seriously. It was thought that the Japanese could make only lower grades of steel, unsophisticated cars, or cheap transistor radios, but that U.S. firms would hold on to the higher value–added, top ends of these markets. In part because of this arrogance, U.S. firms sought “windfall” income by actually selling technology to firms that would soon be their competitors. Meanwhile, capital and technology were becoming more mobile, and Japan and a few other countries became major sources of innovation and global finance. The momentum of the East Asian firms was further increased as these firms enjoyed the advantage of home and nearby markets that were growing faster than those in the United States and Europe.

When the U.S. technology system found itself challenged by the Japanese and others, many firms sought to reassert their dominance by lobbying for the protection of their home markets and by using their overwhelming strengths in basic technology and their access to capital to maintain competitiveness. Still, many leading U.S. firms, such as RCA, Zenith, and most of the integrated steel producers, failed. But others, such as GE and Motorola, thrived in the new environment. Those that succeeded were relatively quick to give up industries where there was little chance to compete against their new rivals, quick to find new opportunities outside the United States, and often quick to find new partners.

The globalization of today represents another quantum leap. We believe it is different enough to characterize it as “third-generation globalization.” It stems from the emergence of a new trade environment in the 1990s that has vastly reduced barriers to the flow of goods, services, technology, and capital. The move to a new environment was accelerated by the development and diffusion of new communications, information, and work-sharing technologies over the past decade.

Strategies that may have served U.S. firms in the second-generation globalization will not work in the third-generation world. The new emerging economies are an order of magnitude larger than those that emerged a generation ago, and they are today’s growth markets. Nor does the United States, despite its undeniable strengths, enjoy global dominance across the range of cutting-edge technologies. Moreover, U.S. multinationals are weakening their national identities, becoming citizens of the countries in which they do business and providing no favors to their country of origin. This means that the goal advocated by some U.S. policymakers of having the United States regain its position of leadership in all key technologies is simply not feasible, nor is it clear how the United States would retain that advantage when its firms are only loosely tied to the country.

We believe that there are opportunities as well as challenges in the third-generation world. Our research, however, does suggest some other reasons to be concerned about certain developments that are now taking place.

Current trends could lead to an unnecessary weakening of one of the foundations of U.S. economic strength: the country’s national and regional innovation systems. Four factors have surfaced in our research that, in combination, may undermine the innovation capacity of U.S.-based firms and technology-savvy regions of the country.

The bandwagon syndrome. As U.S. multinationals join the bandwagon of offshore technology development, they often seem to go beyond what makes economic sense. Top managers at many firms are coming to believe that they have to move offshore in order to look as though they are aggressively cutting costs—even if the offshoring does not actually result in demonstrated savings. None of the companies that we studied conducted systematic cost/benefit analyses before moving technology development activities offshore.

The snowball effect. The more that U.S. multinationals move activities offshore, the more sense it makes to offshore more activities. When asked what activities will always have to be done in the United States, the engineering managers we interviewed could not give consistent and convincing answers. One R&D manager said he found it difficult to engage in long-term planning because he was no longer sure what capabilities remained at his company after recent waves of technology outsourcing.

The loss of positive externalities. Some multinationals are finding that if their technology is developed offshore, then it makes more sense to invest in offshore universities than in domestic universities. Support for summer internships, cooperative programs, and other efforts at U.S. universities becomes less attractive. As one study participant noted, “Why contribute to colleges from which we no longer recruit?”

The rapid rise of competing innovation systems. Regional competence centers or innovation clusters in the United States grew haphazardly in response to local market stimuli. China, India, and other countries are much more explicitly strategic in creating competence and innovation centers. Although markets have worked well for the U.S. centers, it is essential that these centers have a better sense of where their overseas rivals are moving, what comparative advantages provide viable bases for local development, and how to strengthen them.

As these developments have unfolded, many U.S. firms or their domestic sites are now running the risk of losing their capabilities to innovate. At best, they may be able to hold on to only a diminishing advantage in brand-name value and recognition.

Another factor that is proving important is the declining ability of the United States to attract the world’s best S&T talent. As an open society and the world’s leading innovator, the United States was long able to depend heavily on the inflow of human capital. Although the market impact of high-skill immigration has been widely debated, it is clear that this inflow eased the pressure to increase the domestic S&T workforce through either educational or market inducements.

The United States was highly dependent on foreign-born scientists and engineers in 1990, and its growing need for S&T human resources in the 1990s was met largely through immigration. An issue widely discussed and analyzed in depth by the National Science Foundation (NSF), among others, is that the inflow of immigrant S&T personnel began to slow in the late 1990s. Coupled with the longer-term decline in the number of U.S. students entering S&T fields and careers, this raises concerns about whether the United States will have adequate personnel to maintain its technological leadership.

The changes in migration patterns go beyond just the availability of a science and engineering workforce. Immigrants have been an important source of technology entrepreneurship, particularly in information technology. Less noted is the potentially large loss of entrepreneurship and innovation as fewer emerging-economy S&T people arrive who might start businesses here, and as growing numbers of successful U.S.-based entrepreneurs return to their home countries to take advantage of opportunities there.

It seems clear from our interviews, however, that efforts to solve the perceived U.S. technology problem by emphasizing policies to induce more U.S. students to major in engineering are no more likely to succeed than were similar efforts made in response to the Japanese challenge. None of the engineering managers we interviewed mentioned a shortage of new graduates in engineering as a problem. Indeed, some managers said they would not recommend that their own children go into engineering, since they did not see it as a career with a bright future. Several said they were not allowed to increase “head count” in the United States at all; if they wanted to add engineers, then they had to do it offshore. Increasing the number of engineers coming into the system might do no more than raise the unemployment rates of engineers. In fact, if increasing the short-term supply of scientists and engineers leads to increased unemployment and stagnant wages, it will further signal to students that this is not a good career choice.

To be sure, there are good reasons to increase the representation of women and minorities in U.S. S&T education programs. It also is desirable to increase the technical sophistication of U.S. students more broadly, and to make it attractive for those who are so inclined to go into the S&T professions. But “throwing more scientists and engineers at the problem” should not be sought as a strategy to regain a U.S. monopoly over most cutting-edge technologies. It would be a mistake to try to replicate the technological advantages enjoyed by other countries in these areas. The United States cannot match the Chinese or Indians in numbers of new engineering graduates.

Rather, the United States needs to develop new strengths for the new generation of globalization. With U.S. and other multinational firms globalizing their innovation work, emerging economies developing their education systems and culling the most talented young people from their huge populations, and communication technologies enabling the free and fast flow of information, it is hard to imagine the United States being able to regain its former position as global technology hegemon.

What the United States needs now is to find its place in a rapidly developing global innovation system. In many cases, strong companies are succeeding through the integration of technologies developed around the world, with firms such as GE, Boeing, and Motorola managing project teams working together from sites in the United States, India, China, and other countries. It is unclear, however, to what extent the United States would benefit from subsidizing the technology development efforts of companies headquartered in the United States. For example, it is Toyota, not GM, that is building new auto plants in the United States; it is China, not the United States, that owns, builds, and now designs what were IBM-branded personal computers; and it is countries ranging from Finland to Taiwan that are doing leading-edge electronics development. The one area overwhelmingly dominated by the United States, packaged software development, employs less than one half of 1% of the workforce and is unlikely to have a large direct impact on the economy, although use of the software may contribute significantly to productivity increases in other industries.

As a country, the United States is strong in motivating university researchers to start new enterprises, from biotechnology to other areas across the technology spectrum. The United States is not as strong when it comes to projects where brute-force applications of large numbers of low-wage engineers are required. Nor is the United States as strong in developing technologies for markets very different from its own. Competitive strategies from the past will not change this situation. No amount of science and engineering expansion will restore U.S. technology autarky. Instead, a new approach—collaborative technology advantage—is needed to develop a vibrant S&T economy in the United States.

Policies for strength

We believe that the government, universities, and other major players in the U.S. innovation system need to work toward three fundamental goals:

First, the United States should develop national strategies that are less focused on competitive, or even comparative, advantage in the traditional meaning of these terms, and are more focused on collaborative advantage. It is tempting to think of technology in neomercantilist terms. National security, both militarily and economically, can depend on a country’s ability to be the first to come out with new technologies. In the 1980s, it was widely believed that Japan and other East Asian economies were using industrial policies to create comparative advantage in high-tech industries in the belief that these industries provided unusually high levels of spillover benefits. U.S. policymakers were advised to counter these moves by investing heavily in high technology, restricting imports of high technology, and promoting joint technology development programs by U.S. firms.

To be sure, it makes sense for U.S. policy to ensure that technology development activities are not attracted away by foreign government policies, where the foreign sites do not have legitimate comparative advantages. It also makes sense to make sure that the United States retains strength in technologies that truly are strategic. An important, but difficult, task is finding ways to develop policies that strengthen U.S. S&T capabilities when market pressures are leading firms to disinvest in their U.S. capacity, including their university collaborations.

To start, the nation needs to counter the bandwagon and snowball effects that are driving the outsourcing of technology in potentially harmful ways. To do this, it will be necessary to develop new tools to assess the costs and benefits of the outsourcing of technology development, particularly tools that more comprehensively account for the costs. There also is a need to develop a better understanding of what technology development activities are most efficiently colocated, so that the United States does not end up destroying its own areas of comparative advantage. NSF and other funding agencies could sponsor such studies.
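As a purely illustrative sketch of what such an assessment tool might tally, the arithmetic is simple once the hidden costs are named; every parameter name and dollar figure below is a hypothetical placeholder rather than a finding from the companies studied.

```python
# Illustrative only: a fuller accounting of moving an engineering activity
# offshore than a simple wage comparison. Every parameter name and figure
# is a hypothetical placeholder, not a finding from the companies studied.

def net_annual_savings(onshore_cost, offshore_cost, coordination_overhead,
                       travel_and_management, rework_and_delay,
                       knowledge_transfer, ip_and_quality_risk):
    """Direct wage savings minus the costs that firms rarely tally."""
    gross_savings = onshore_cost - offshore_cost
    hidden_costs = (coordination_overhead + travel_and_management +
                    rework_and_delay + knowledge_transfer + ip_and_quality_risk)
    return gross_savings - hidden_costs

# An activity costing $10 million onshore and $4 million offshore looks like a
# $6 million win, yet nets out to less than $1 million once hidden costs are counted.
print(net_annual_savings(10e6, 4e6, 1.5e6, 0.8e6, 1.2e6, 0.9e6, 0.7e6))
```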

[Figure: Foreign-Born S&E Workers in the United States, 2000. Source: U.S. Census Bureau.]

But then the United States needs to aggressively look for partnership opportunities—mutual-gain situations—around the globe. National government funding agencies, such as NSF, and regional governments can support projects that work toward these aims. Designers of tax policies at all levels also can redirect policies in these directions. Some of these mutual-gain situations will involve the creation of technologies that unequivocally address global needs to minimize environmental damage or reduce demands on diminishing resources.

Regions hosting or developing technology competency centers need to look closely at the international competition. They need to identify niches that exist or can be developed in the context of a global innovation system. Existing artificial barriers for certain industries and technologies will continue to fall at a rapid pace as the world continues its path to globalization. Alliances may be possible between U.S. centers of technology competence and those in other countries.

We believe that one area in which the United States enjoys comparative advantage is its patent system. To a large degree, the U.S. patent office serves as the patent office for the world. Foreign firms want access to the U.S. market, so they must disclose their technology by filing for patents in the United States. It is essential that the United States preserve (and perhaps extend) this advantage.

As a second goal, the United States needs to help create a world based on the free flow of S&T brainpower rather than a futile attempt to monopolize the global S&T workforce. The United States can further develop its advantage as an immigrant-friendly society and become the key node of new networks of brain circulation. Importantly, the United States needs to redesign its immigration policies with the long view in mind. New U.S. policies should focus on the broad goal of maximizing the innovation and productivity benefits of the global movement of S&T workers and students, rather than the shortsighted aim of importing low-cost S&T workers as a substitute for developing the U.S. domestic workforce. This implies that an alternative to the current types of visas that cover foreign-born students and S&T workers—such as the H-1b visa—needs to be developed. Promoting the global circulation of students and workers, while not undermining the incentives for U.S. students and workers, will create human capital flows that support collaborative advantage. The goal should be to make it easier for talented foreign S&T people to come, study, work, and start businesses in the United States, and also make it easier for foreign members of U.S. engineering teams to come to the United States to confer with their teammates. Visas shouldn’t be used to have permanent workers train their replacements or to distort market mechanisms that provide incentives for long-term S&T workforce development.

Immigration policies that support global circulation would allow easy short-term entry of three to eight months for collaboration with U.S.-based scientists and engineers. Facilitating cross-border projects actually helps retain that work here; our research finds that when projects stumble because of collaboration difficulties, the impulse is to move the entire project offshore. When U.S. S&T workers have more opportunities to work with foreign S&T workers, they broaden their perspective and better understand global technology requirements. A new type of short-term, easy-to-obtain visa for this purpose would strengthen the U.S. collaborative advantage while not undermining the incentives for U.S. students to pursue S&T careers and continuing to attract immigrants who want to become part of the permanent U.S. workforce.

Finally, in working toward the first two goals, the United States needs to develop an S&T education system that teaches collaborative competencies rather than just technical knowledge and skills. U.S. universities must restructure their S&T curricula to better meet the needs of the new global innovation system. This may include providing more course work on systems integration, entrepreneurship, managing global technology teams, and understanding how cross-cultural differences influence technology development. Our findings suggest that it is not the technical education but the cross-boundary skills that are most needed (working across disciplinary, organizational, cultural, and time/distance boundaries). Universities must build a less parochial, more international focus into their curricula. Both the implicit and explicit pedagogical frameworks should support an international perspective on S&T—for example, looking at foreign approaches to science and engineering—and should promote the collaborative advantage perspective that recognizes the new global S&T order. Specific things that could be done include developing exchange programs and providing more course work on cross-cultural management, and encouraging firms to become involved in this effort through cooperative ventures, internships, and other programs.

Our research suggests that the new engineering requirements, like the old, should build on a strong foundation of science and mathematics. But now they go much further. Communication across disciplinary, organizational, and cultural boundaries is the hallmark of the new global engineer. Integrative technologies require collaboration among scientific disciplines, between science and engineering, and across the natural and social sciences. They also require collaboration across organizations as innovation emanates from small to large firms and from vendors to original equipment manufacturers. And obviously they require collaboration across cultures as global collaboration becomes the norm. These requirements mandate a new approach not only to education but to selecting future engineers: colleges need to recognize that the talent required for the new global engineer falls outside their traditional student profiles. Managers increasingly report that although they want technically competent engineers, the qualities most valued are these other attributes.

Education policy must reflect the new engineering paradigm. It must structure science and engineering education in ways that encourage students to pursue the new approaches to engineering and science. Indeed, we believe that the new approaches will make careers in science and engineering more exciting and attractive to U.S. students. Information technology, for example, is famous for innovation that comes from people educated in a wide range of fields working across disciplines. The education system needs to better understand the new engineering requirements rather than attempt to shore up approaches from a previous era. This is a challenge that goes beyond providing more and better science and math education. It does, of course, require strengthening basic education for the weakest students and schools, but it also requires combining the best of education pedagogy with an understanding of the requirements of the “new” scientist and engineer.

Leadership in developing a global science, technology, and management curriculum may also attract more international S&T students to U.S. universities. Other desirable changes may include collaborative agreements with universities in emerging economies that enable U.S. students to be sent there for part of their education, thus helping to promote the overall move to brain circulation. Government support might be needed to make such programs economically viable for U.S. universities—for example, by making up some of the tuition differences between U.S. and foreign universities.

We believe that progress toward these goals will lead to a future where U.S. residents can more fully benefit from the creativity of S&T people from other countries, where the United States is still a leader in global innovation, and where a stronger U.S. system is revitalized by accelerated flows of ideas from around the world.

Yes, in My Backyard: Distributed Electric Power

More than four generations of U.S. residents have come to accept the notion that electricity is best produced at large centralized power plants owned by monopolies. As a result, utilities continue to be protected from market discipline, and few people challenge the wildly inaccurate assumption that the United States has already achieved maximum efficiency in producing electricity.

For the first time in almost a century, an array of innovations (including modern generators, motors, and computers) could alter the electricity industry’s basic structure. These new devices offer increased efficiency and reliability in the production and use of electricity, as well as reduced pollution. However, an array of policy barriers, built up over decades to protect utility monopolies, discourages modern technologies and entrepreneurs.

Electricity innovation is critical because the U.S. power system is a rickety antique. The average generating plant was built in 1964 using 1959 technology, and more than one-fifth of the nation’s power plants are more than 50 years old. Utilities have not improved their delivered efficiency in some 40 years, and today’s high-tech businesses demand more reliable power than the current system can provide. High-voltage transmission lines, moreover, were designed before planners ever imagined that enormous amounts of electricity would be sold across state lines, and, consequently, the distribution system often becomes overloaded, resulting in blackouts.

The consequences of the system’s inefficiencies and stresses are little noticed, yet staggering. The industry’s stagnant efficiency means that two-thirds of the fuel burned to generate electricity is wasted. Meanwhile, deficiencies in the quality and reliability of the power supply—ranging from millisecond fluctuations that destroy electronic equipment to the summer 2003 blackout that left 50 million people without power—are hurting the nation’s high-tech industry and annually costing residents $119 billion, according to the Electric Power Research Institute. Power production also is the nation’s largest source of pollution, spewing tons of mercury, sulfur dioxide, and other contaminants into the air and waters.

The efficiency limit

In fact, the U.S. power system began moving away from centralized generation almost 40 years ago, but the transition went virtually unnoticed. For the previous several decades, electrical engineers had developed boilers that could withstand enormous and increasing amounts of heat and pressure. Boilers could reach temperatures exceeding 1,050°F and pressures above 3,200 pounds per square inch, turning water into dry steam. Utility companies had employed an array of new alloys to protect a power plant’s metal from corrosion and fatigue. They also met rising power demands with larger turbines, and they demanded that equipment manufacturers build bigger and bigger units, often without taking the time to test and learn from each incremental increase.

But progress stalled in 1967, which represented the peak in power plant efficiency. Despite continuing efforts by utility engineers, no longer would new generating equipment be more efficient than the machinery it replaced. Continued expansion would no longer mean lower prices for the consumer.

Scientists, using thermodynamic theory and calculating the limits of materials, long had predicted a steam generator’s maximum efficiency to be approximately 48%. Thus, for every 100 units of fuel burned, a power plant could generate at most 48 units of electricity. The remaining 52 units would become low-temperature heat, usually disposed of as waste into adjacent rivers or the air.
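The arithmetic behind those numbers is simple enough to spell out. The short sketch below uses the 48% ceiling cited above and, for comparison, the roughly one-third efficiency of a typical plant; the figures are illustrative round numbers.

```python
# A minimal arithmetic sketch of the ceiling described above: the theoretical
# 48% limit versus the roughly one-third efficiency of a typical plant.
# Figures are illustrative round numbers.

def electricity_and_waste(fuel_units, efficiency):
    electricity = fuel_units * efficiency
    waste_heat = fuel_units - electricity
    return electricity, waste_heat

print(electricity_and_waste(100, 0.48))  # ~48 units of electricity, ~52 of waste heat
print(electricity_and_waste(100, 0.33))  # ~33 units of electricity, ~67 of waste heat,
                                         # i.e., about two-thirds of the fuel wasted
```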

Yet even before efficiencies reached 35%, utility managers began to realize that their larger systems were not performing well. Turbine blades twisted frequently, furnaces could not maintain high temperatures, metallurgical problems became apparent in boilers and turbines, and a slew of other defects retarded reliability and performance. Large plants, because they tended to be custom-built on site rather than prefabricated in a factory, also required expensive construction techniques. A General Electric manager later admitted that the rapid growth in the size of generators and boilers caused “major failures leading to the need for costly redesigns, costly rebuilds in the fields, and the additional costs involved for purchased power.”

Power executives slowly became skeptical of giant generators, and the era of centralization waned. “Central thermal power plants stopped getting more efficient in the 1960s, bigger in the 1970s, cheaper in the ‘80s, and bought in the ‘90s,” says the Rocky Mountain Institute. Reflecting centralization’s efficiency limit, “smaller units offered greater economies from mass production than big ones could gain through unit size.”

Compared with the decades-old, efficiency-stagnant generators protected by tradition-bound utility monopolies, an array of modern equipment offers opportunities for new and innovative players to enter the electricity market. Most discussions of alternative energy strategies tend to focus on wind turbines, fuel cells, and solar photovoltaics, but numerous less “sexy” generators are challenging centralization and providing increased efficiency and decreased emissions.

One of the hottest options is cogeneration. This ingenious approach, a primitive model of which Thomas Edison employed at his Pearl Street power plant in New York City, taps the low-temperature heat that remains after electricity is generated and directs it to other uses. A cogenerator captures the usually wasted heat to warm buildings, power chillers, dry paints and materials, and run a variety of industrial processes. The benefit of cogeneration—sometimes called “combined heat and power”—is efficiency. The hybrid machines more than double the deployment of useful energy. A typical power plant producing only electricity is approximately 32% efficient, whereas a cogenerator producing both electricity and heat can be as much as 80% efficient. Despite the economic downturn between 1998 and 2002, the United States added some 31,000 megawatts of cogeneration capacity during this period—an amount equal to approximately 60 large coal-fired power plants, each producing roughly 500 megawatts. Cogenerators now supply some 82,000 megawatts of capacity, which is approximately 8.6% of U.S. generating capacity. The Department of Energy has set a 92,000-megawatt goal for 2010 and has determined that the potential for cogeneration nationwide exceeds 200,000 megawatts.
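These figures can be checked with back-of-the-envelope arithmetic; the sketch below uses the rounded plant size and efficiencies cited above.

```python
# Back-of-the-envelope check on the cogeneration figures cited above;
# the plant size and efficiencies are the rounded values from the text.

added_capacity_mw = 31_000           # cogeneration capacity added, 1998-2002
typical_coal_plant_mw = 500          # output of one large coal-fired plant
print(added_capacity_mw / typical_coal_plant_mw)   # 62.0, i.e., "approximately 60" plants

fuel_units = 100
electricity_only = fuel_units * 0.32         # ~32 units of useful energy
combined_heat_and_power = fuel_units * 0.80  # ~80 units of useful energy
print(combined_heat_and_power / electricity_only)  # ~2.5, "more than double"
```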

Innovative generators also create opportunities for energy recycling. At U.S. Steel’s Gary Works along Lake Michigan, for instance, a 161-megawatt cogenerator (enough to supply a small town) is powered by the heat once released from the giant blast furnaces. At Ispat Inland’s steel-making operation in East Chicago, Illinois, a similar unit provides 95 megawatts of electricity as well as process steam. Sixteen heat-recovery boilers capture and use the waste heat from Ispat’s metallurgical coke-making facility, and a desulfurization process and fabric-filter system make the company the steel industry’s environmental standard. According to Primary Energy, the company that operates the cogenerators, recycled heat could generate a substantial 45,000 megawatts of electricity nationwide and reduce carbon dioxide pollution by 320 million tons. Says company chair Thomas Casten: “It is every bit as environmentally friendly as heat and power from renewable sources, including solar energy, wind, and biomass.”

Small generators have been used for decades, but recent technological advances have made possible a new generation of clean and highly efficient units. Improvements in truck turbochargers and hybrid electric vehicles have spurred the development of a slew of microturbines, which feature a shaft that spins at up to 100,000 revolutions per minute and drives a high-speed generator. Because microturbines use devices called recuperators to transfer heat energy from the exhaust stream back into the incoming air stream, they are far more efficient than other small combustion turbines. The recuperators also lower the exhaust temperature to the point where little nitrogen-oxide pollution is formed. Mass production should soon lower costs and make them attractive to the residential market. Microturbines range in size from 24 kilowatts (enough to power a home) to 500 kilowatts (enough to power a McDonald’s), and their operating costs are about a third of a comparable diesel generator’s. Maintenance costs also are relatively low, because microturbines have only one moving part: the high-speed shaft spinning on air bearings.

Most of these modern innovations allow for onsite, non-centralized, and relatively small-scale electricity production. Such decentralized generation avoids the typical transmission and distribution losses of 10 to 20%. It also offers consumers the opportunity to optimize their power systems, increase efficiency, lower costs, enhance productivity, and reduce emissions. Today’s dominant utility approach—centralized power plants for electricity and separate units for thermal energy to heat or cool buildings—might have made sense with the state-of-the-art generation and distribution technologies of the 1950s, but smaller and dispersed electricity systems now provide economic and environmental advantages.

The interplay of advanced technologies and innovation-based policies could take the power industry down divergent paths. Clarity on the dominant trends may be a few decades away, and the intervening years may witness numerous regional experiments. The Dutch, for instance, are advancing distributed generation. Iceland is moving toward a hydrogen-based economy. U.S. northeastern states are considering a trading program for carbon dioxide emissions, whereas Texas is becoming the nation’s wind-energy capital.

Differing paths notwithstanding, the most likely trend favors dispersed over centralized generation. Most of today’s technological innovations suggest a continuing shift away from an electricity system based on giant generators linked to customers by a vast transmission and distribution network. More promising is a more efficient grid that links decentralized turbines, cogenerators, energy recyclers, fuel cells, or renewable technologies. If there is no economic advantage to building giant 1,000-megawatt plants, then the flexibility offered by small facilities becomes a significant advantage.

Localized power avoids or reduces distribution bottlenecks and curtails the need for massive investments in long-distance (and unpopular) transmission lines. Some 10% of electricity is sacrificed during the typical high-voltage transmission process as a result of resistance and heat loss. During peak hours, that number rises to 20%. Thus, congestion-related losses require the construction of extra generators and lines. Although regional power grids remain needed for wholesale exchanges, the costs of line losses would shrink if electricity producers were located close to power consumers.
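A rough sketch, using the loss figures cited above, shows how much extra power a distant plant must generate just to cover what the wires dissipate.

```python
# Rough sketch of how line losses inflate the generation needed to deliver a
# given load; the 10% and 20% loss fractions are the figures cited in the text.

def generation_required(delivered_mw, loss_fraction):
    # To deliver D megawatts when a fraction L is lost in transit,
    # a remote plant must generate D / (1 - L).
    return delivered_mw / (1.0 - loss_fraction)

print(generation_required(1000, 0.10))  # ~1111 MW to deliver 1,000 MW off-peak
print(generation_required(1000, 0.20))  # 1250 MW at peak
print(generation_required(1000, 0.00))  # 1000 MW if generated at the point of use
                                        # (ignoring small local distribution losses)
```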

Harsh weather, terrorist attacks, and simple accidents have highlighted the vulnerability of the centralized power system, with its large power plants and far-flung transmission wires. In contrast, smaller dispersed units provide more security and resiliency. To state the obvious, a destroyed microgenerator has smaller impacts than damage to a nuclear reactor or high-voltage line.

A plethora of distributed generators also can provide the highly reliable and high-quality power demanded increasingly by the array of businesses that cannot afford energy disruptions. Similarly, onsite units can avoid most power outages and surges that result from problems with the grid, as evidenced in summer 2003 when Kodak’s factory in Rochester, New York, continued to operate during the massive blackout that left 50 million people without power in the Northeast and Midwest.

Perhaps decentralization’s key benefits are financial. Simply put, smaller modules are less risky economically because they take less time to devise and construct, obtain greater efficiencies, and enjoy portability. Small generators, which can be built in increments that match a changing electricity demand, allow for more reliable planning. Large units, in contrast, take several years to complete, during which time forecasts of electricity demand can shift dramatically, perhaps eliminating or reducing the need for the investment. Big plants also invariably “overshoot” by adding huge supplies that then remain idle until the expected demand “catches up.”

Even fervent distributed-generation advocates, however, do not envision the total abandonment of today’s centralized generators or long-distance transmission lines. Rather, the goal is a more equal mix of central power and distributed energy. Compared to the present system’s virtually total reliance on large plants and long lines, a mixed approach would provide substantial economic, environmental, and security benefits. The American Gas Association forecasts that by 2020, small distributed generators will account for 20% of the nation’s new electric capacity.

Although the U.S. market for distributed generation is substantial, perhaps the greatest potential is with the world’s 3 billion poor people who have no reliable access to electricity. Onsite generators can save the $1,500 per kilowatt that developing countries would be required to spend on transmission lines. They could enable those nations to leapfrog the power grid, eliminating the need to build an expensive system based on giant generators and high-voltage wires, much the same way in which some countries are using cell phone technology to leapfrog the need to string expensive telephone landlines. If electricity consumption in developing countries continues to rise rapidly, then dispersed technologies, including gas turbines, recycled energy, wind turbines, and fuel cells, also may be the best means to minimize carbon dioxide emissions and limit demand for oil and natural gas from the world’s volatile regions. From the U.S. perspective, developing countries also could become a large export market for innovative companies.

Removing policy barriers

Despite the many and varied benefits that modern technologies can bring to the nation’s electric system, scores of laws and regulations protect old-line monopolies and lock out the most promising innovations. What is needed is a policy revolution that removes the barriers to these technological advances and obtains innovation’s benefits.

The chief barrier-busting proponents have been independent generators (who want to enter the electricity business), industrial and commercial customers (who want to shop for lower-priced power), and economists (who favor the marketplace over regulation). Some of the manufacturers of onsite generators and some industrial customers, such as Caterpillar and Dow Chemical, respectively, are huge and enjoy substantial political clout, yet these innovation advocates have not been able to match the muscle of well-funded and well-positioned monopolists and their supporters. Relative to the innovative new companies in the telecommunications, airline, and trucking industries, independent generators have made only limited progress on the policy front.

Competition advocates enjoyed their first success in 1978 with passage of the Public Utility Regulatory Policies Act, which enabled cogenerators and renewable energy suppliers to sell electricity to regulated utilities. In the mid-1980s, deregulating the natural gas market lowered the price and increased the availability of that relatively clean fuel. The Energy Policy Act of 1992 and subsequent rulings by the Federal Energy Regulatory Commission (FERC) allowed unregulated independent generators to sell wholesale power over the grid to distant customers.

Competition opponents, however, point to California’s 2001 electricity disaster, when prices skyrocketed and the state’s largest utility declared bankruptcy. Yet that state’s power industry “restructuring” resulted largely from ill-considered political deals made in the mid-1990s that tried to appease virtually every interest group. The compromises may have produced a unanimous vote in the state legislature in 1996, but in hindsight, according to Paul Joskow, an economics professor at the Massachusetts Institute of Technology, “getting it done fast and in a way that pandered to the many interests involved became more important than getting it right. The end result was the most complicated set of wholesale electricity market institutions ever created on earth and with which there was no real-world experience.” According to the Congressional Budget Office, “Deregulation itself [in California] did not fail; rather, it was never achieved.”

In trying to prevent any one company from obtaining too much market control, for instance, California politicians restricted long-term power contracts and forced all generators to deal in the volatile spot market. Electricity suppliers, therefore, had to hash out prices daily in a centralized power exchange, which mandated that utilities pay the highest price offered on any given day. Clever marketers soon learned how to “game” the system, concocting “round-trip” trades that sent power back and forth across state lines in order to inflate sales volumes and artificially drive up short-term prices. Unable to pass on the higher costs, Pacific Gas & Electric, the state’s largest utility, filed for bankruptcy, and the state’s other two utilities teetered. Rolling blackouts became common, forcing motorists to navigate intersections without traffic lights and consumers to use flashlights at grocery stores.

Other states, in contrast, have realigned their power industries and obtained positive results. Texas, for instance, allows distribution utilities to purchase power through long-term contracts as well as on the spot market. That flexibility and the efforts by state officials to resolve constraint points have produced a vibrant electricity market that embraces innovative technologies. In 2004, independent suppliers in Texas offered 60% of the electricity used by commercial and industrial customers and 14% of the power demanded by residential consumers. Pennsylvania officials, who also allow long-term contracts and attack barriers to competition, calculate that the state’s restructuring efforts have saved residential and industrial customers some $8 billion.

The shift to innovation will take time, and establishing market rules will require a good bit of trial and error. Electricity markets do not occur naturally; rather, they are developed. With natural gas deregulation, the FERC went through numerous revisions over seven years before effectively opening access to alternative natural gas suppliers.

Innovation-enhancing markets will require the elimination of numerous regulatory, financial, and environmental obstacles. Current rules designed to support the status quo—centralized steam-powered generators controlled by regulated monopolies—include restrictive interconnection standards and outmoded equipment-depreciation schedules. Dominant power companies, for instance, often block competitors from connecting to the grid or impose obsolete and prohibitively expensive interconnection standards and metering requirements that have no relation to safety. Depreciation schedules for electricity-generating equipment (which are, on average, three times longer than those for similar-sized manufacturing equipment) discourage the introduction of innovative technologies that spur efficiency and productivity.

Today’s utility monopolies, moreover, enjoy the sole right to string wires. Although private firms can construct natural-gas pipelines, and developers can build telephone lines, steam tunnels, and Internet extensions to their neighboring buildings, anyone running an electric wire across a street will be sent to jail. If the rules changed, few businesses would be likely to construct their own electric lines, just as there are few independent gas pipelines. But the threat of competitive wires would transform the power industry and end the monopolies’ ability to block entrepreneurs from generating their own electricity.

Utilities, in order to protect their monopolies, also impose exorbitant rates for backup power, which most entrepreneurs need because they regularly buy and sell on the grid. Distribution monopolists typically assume that every single independent generator will be out of service at the very same time, and they price backup power as if it all had to be delivered at the moment it is scarcest and most expensive. The monopolist’s high backup rates are comparable to a home insurance company trying to set its annual premium at a house’s full replacement price.
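A stylized calculation, which assumes statistically independent outages and uses purely hypothetical numbers, shows how far that assumption overstates the backup capacity actually needed.

```python
# Stylized calculation with purely hypothetical numbers, assuming outages
# are statistically independent, of how unlikely a simultaneous failure of
# every independent generator really is.

n_generators = 20      # hypothetical number of onsite units on one system
outage_prob = 0.05     # hypothetical chance that any one unit is down

# Probability that all of them are down at the same moment:
print(outage_prob ** n_generators)    # roughly 1e-26, effectively never

# Expected number of units down at any given time:
print(n_generators * outage_prob)     # 1.0, so a reserve sized for a few units,
                                      # not all 20, would cover the realistic risk
```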

Environmental laws must change as well if energy efficiency is to be achieved. The United States currently measures air emissions based on fuel inputs, usually stated as pounds of pollutants per unit of fuel. Unfortunately, this input-based approach rewards power plants that burn a lot of fuel, regardless of their efficiency. In contrast, output-based regulations would calculate emissions based on the amount of electricity generated, thereby rewarding innovative generators that supply more electricity with reduced emissions.
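A simple comparison, with hypothetical numbers chosen only to illustrate the ranking, shows how the two accounting bases treat the same plants.

```python
# Hypothetical numbers contrasting input-based and output-based emission
# metrics; the point is the ranking, not the particular values.

fuel_burned = 100.0     # units of fuel, identical for both plants
emissions = 50.0        # pounds of pollutant, identical for both plants

old_plant_output = 33.0     # useful output from a ~33%-efficient plant
cogen_output = 80.0         # useful output from an ~80%-efficient cogenerator

# Input basis: both plants score exactly the same.
print(emissions / fuel_burned, emissions / fuel_burned)   # 0.5 and 0.5

# Output basis: the efficient plant is rewarded for doing more with the same fuel.
print(emissions / old_plant_output)   # ~1.5 pounds per unit of useful output
print(emissions / cogen_output)       # ~0.6 pounds per unit of useful output
```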

With assets exceeding $600 billion and annual sales above $260 billion, electric utilities are the nation’s largest industry. No doubt restructuring such a behemoth is difficult, the obstacles to change are formidable, and most utility monopolies are working aggressively to remain protected from entrepreneurs.

But given its rickety nature, the U.S. electricity system must change. Innovation’s environmental benefits alone are critical, because power plants spew almost half of all North American industrial air pollutants, and 46 of the top 50 emitters are electricity generators. In contrast, new gas turbines emit 500 times less nitrogen oxide per kilowatt-hour than today’s older power plants.

Businesses also increasingly need more reliable power than the status quo can provide. Hewlett-Packard estimates that a 15-minute outage at one of its chip manufacturing facilities would cost $30 million, or half the plant’s annual power budget. According to a microchip executive, “My local utility tells me they only had 20 minutes of outages all year. I remind them that these four five-minute episodes interrupted my process, shut down and burnt out some of my controls, idled my workforce. I had to call in my control service firm, call in my computer repair firm, direct my employees to ‘test’ the system. They cost me eight days and millions of dollars.” No wonder more and more corporations are installing their own onsite generators in order to control costs and increase security. The First National Bank of Omaha, for instance, purchased stacks of fuel cells after the local utility’s one-hour power outage shut down its data processing network at a cost of $6 million.

Many other developed countries have promoted entrepreneurs over monopolists, and they are enjoying numerous benefits. In the four years since Australia restructured its utilities, wholesale power prices fell 32% in real terms, and air quality improved. Six years after the United Kingdom began to deregulate electricity sales and to shift from coal to natural gas, carbon dioxide emissions from power generation fell 39% and nitrogen oxides 51%. Even limited competition in the United States since the Public Utility Regulatory Policies Act of 1978 helped prompt a 32% drop in wholesale electricity prices.

Unless the United States further alters today’s centralized and monopolized paradigm, when the rest of the world electrifies and begins to enjoy the drudgery-reducing benefits of modern appliances, the resulting environmental damage will be staggering. The nation has a moral obligation, therefore, both to help provide power to the world’s poor and to radically alter the ways in which electricity is generated and delivered.

Timing is critical if the United States is to capture additional economic and environmental benefits. In the next several years, much of the nation’s aging electrical, mechanical, and thermal infrastructure will need to be replaced, offering a unique opportunity to substitute efficient generators for outmoded power plants and old industrial boilers.

Maintaining the status quo is no longer an option, in part because the current monopoly-based structure has forced U.S. residents to spend far more than needed on outmoded and polluting energy services. Achieving the benefits of innovation requires the elimination of numerous regulatory, financial, and legal barriers. If policymakers can restructure the electricity industry based on the principles of technology modernization, market efficiency, and consumer choice, they will bring about immense benefits for both the economy and the environment.

The Kyoto Placebo

Global warming is a stealth issue in U.S. foreign policy. Even as the effects of mounting carbon dioxide (CO2) begin to make themselves felt, and huge multinationals such as General Electric and Shell announce their own plans of action, the U.S. government still acts as if there is no urgency to the task of cutting CO2 emissions. The moment will shortly be upon us when a solution is needed, fast.

Advocates for action and the 157 ratifying countries (including the European Community) of the 1997 Protocol to the United Nations (UN) Framework Convention on Climate Change, negotiated in Kyoto, console themselves with the thought that at least the Kyoto Protocol has set in place the building blocks of a workable plan for combating the certainty of global warming, in the form of emission targets coupled with international emissions trading. It sets a timetable for capping and then gradually reducing greenhouse gas (GHG) emissions; the developed countries have agreed to roll back their overall emissions to at least 5% below 1990 levels in the 2008–2012 “commitment” period. These are laudable goals, to be sure, but the plan as outlined is anything but workable.

Some of the Protocol’s limitations are well known. Developing countries, for example, are currently exempt from CO2 caps, and the Protocol, strongly opposed by the Bush administration, does not include the United States. Another weakness is one that the Protocol shares with most international environmental agreements: a lack of teeth. Subsequent negotiations have been unable to produce agreement on the penalties for failure to meet the plan’s goals. Worrisome as those liabilities are, though, a more serious problem has largely escaped notice.

The studiously ignored elephant in the room is the shaky and unproven trading system on which the Kyoto Protocol depends. It uses “flexible mechanisms” that allow participants to meet their targets by purchasing emission credits rather than making reductions themselves. Thus, those that have an easier time controlling their emissions may sell credits to those experiencing greater difficulty or higher costs. The Clean Development Mechanism (CDM) facilitates trading with the developing world. In Joint Implementation, a “donor” country invests in pollution abatement measures in a “host” country in return for emission credits. In theory, this emissions market will allow participating countries to meet their CO2 goals, with minimal disruption to their economies.
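A stylized two-party example, with entirely hypothetical abatement costs, captures the logic that makes trading attractive on paper.

```python
# A stylized two-party trade with entirely hypothetical abatement costs.
# Emitters A and B must together cut 100 tons; A's cuts are expensive and
# B's are cheap, so a credit sale meets the same cap at lower total cost.

required_cut = 100    # tons of CO2 the cap demands in total
cost_a = 60           # A's abatement cost, dollars per ton (hypothetical)
cost_b = 20           # B's abatement cost, dollars per ton (hypothetical)

# Without trading, each emitter cuts 50 tons itself.
no_trading_total = 50 * cost_a + 50 * cost_b    # $4,000

# With trading, B makes the full 100-ton cut and sells 50 credits to A at $30/ton.
credit_price = 30
with_trading_total = required_cut * cost_b      # $2,000 spent on actual abatement
a_spends = 50 * credit_price                    # $1,500 for A, versus $3,000 going it alone
b_nets = a_spends - 50 * cost_b                 # $500 earned by B on the credits it sells

print(no_trading_total, with_trading_total, a_spends, b_nets)
```

Whether that tidy logic survives contact with weak monitoring and enforcement is precisely the question.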

The trouble with the trading scheme is that it is based on a handful of unusual, and in some ways experimental, programs in the United States. Though heavily promoted by the World Bank, U.S.-style environmental trading has yet to be tested on a global scale and has never been successfully deployed on a national level in the developing world. Nor is it likely to be. Many countries do not have, and are unlikely to acquire, the oversight and enforcement mechanisms to make global (or domestic) emissions trading work.

Past is prologue

Any effort to control pollution demands a combination of reliable laws, vigilant monitoring of emissions, and consistent enforcement. The best indicator of the ability of most developing countries to provide these ingredients is how they have done in regulating their growing (and choking) locally produced pollution. It is not a record that inspires much hope.

The 1972 UN Conference on the Human Environment in Stockholm marked the beginning of a large global effort to build environmental regimes in countries around the world. In this first great wave of environmental law drafting, many countries patterned their requirements on the apparent growing success of the National Environmental Policy Act, which introduced environmental impact assessment in the United States. In the three decades since then, most countries can point to statutes and environmental agencies or ministries: the formal trappings of environmental protection. Most of these have proved to be frustrating paper exercises. In India, to take a typical example, laws and policies proliferated while air and water quality in the major cities declined, to the extent that Delhi gained the distinction of being the fourth most polluted city in the world.

Frustration with laws that didn’t produce cleaner cities or sustainable forests led to a search for alternatives. The theory developed that polluters could be motivated to put environmental controls in place on the basis of economic self-interest if they were allowed to trade their pollution in an open market. Over the past 15-plus years, many international donor organizations have promoted to domestic environmental regulators the market-based policy instruments that in the United States are used to control emissions of sulfur dioxide (SO2) and nitrogen oxides.

Compared with conventional methods of stemming pollution, an environmental credits market puts even higher demands on infrastructure and regulatory systems. In the United States, the market approach succeeds only because it is implemented in a way that is far from laissez-faire. Its basic regulatory demands—a steady decrease of emissions over time—are nonnegotiable. As plants that can cut emissions with relative ease sell their incremental pollution reductions to other plants that cannot do as well, transactions are regulated down to small details and vigorously enforced. The United States requires every plant in the domestic SO2 trading system to install a special (and expensive) form of equipment called a Continuous Emissions Monitor, which sends real-time data via computer to Environmental Protection Agency (EPA) headquarters in Washington. Traders must use elaborate accounting measures and work in such complete transparency that transactions are tracked on the EPA Web site.

It should come as no surprise that emissions trading has rarely, if ever, taken hold in the other countries to which it has been promoted. Emissions are the currency of the environmental trading system, but many highly industrialized countries, including China, Russia, and other countries of the former Soviet Bloc, do not have adequate monitoring equipment to detect what pollutants, and in what amounts, particular factories and power plants are spewing into the atmosphere. They have weak environmental enforcement systems and cannot really say whether particular plants comply with environmental requirements.

Now along comes the Kyoto Protocol, which seeks to apply a trading scheme to CO2 emissions on a global scale. To accomplish this would require competence and skills both within countries and between them. Not only must countries institute the same combination of laws, supervision, and verification as they would for conventional ways of controlling pollution, their industries must also have an incentive to trade and the analytic tools to determine whether trading will bring them anything of value. This is brand-new territory for many governments and their regulated industries.

Even a seemingly straightforward task such as planting a forest to act as a CO2 sink will strain the competencies of developing countries. For the forest to offset continuous CO2 releases in some other part of the world, the trees will have to survive, thrive, and avoid being cut down for firewood or commercial use. Someone must track the forest’s capacity to absorb CO2 and, as with any activity to reduce CO2, guarantee continuous reductions, day in and day out, over many years. This demands a sustained attention to environmental performance that is notably lacking in much of the world.

Yet the Kyoto Protocol assumes that critical countries such as India, China, and the nations of the former Soviet Bloc can feed verifiable credits into the system—that they can reliably reduce their production of CO2 and sustain these actions over considerable periods of time. It seems unlikely that many countries will hold up their end of the bargain.

Law and disorder

Any country can pass environmental laws. The hard part is to animate the laws through compliance and enforcement—a challenge that requires, if not a robust legal system, at least a working and reliable one. Law sets the rules, establishes processes for enforcement, and provides recourse for parties who believe they have been cheated or denied what they bargained for.

For many of the countries that are necessary participants in a trading system, law in general (not only environmental law) has a troubled history. In the countries of the former Soviet Bloc, law effectively was what suited the needs of powerful leaders. In China, a huge and growing emitter of GHGs, historical experience with written laws to manage complex relationships is very shallow. Commercial relationships were built for centuries on personal assessments of the trustworthiness of a trading partner, not on contract law.

Survey the world, and few countries can demonstrate dependable legal systems and an independent judiciary ready to stand behind contracts such as environmental trading agreements. In India, which does boast a working legal system, the independent and respected supreme court has occasionally stepped in to force government agencies and individuals to implement environmental laws and policies that would otherwise have languished. Even so, the lower courts are notoriously slow and unreliable and are dogged by allegations of corruption.

And even where there is a will to prosecute tough cases against cheaters, facts are not so easy to find in societies without strong traditions of transparency and information access. Bringing a case is even harder when one party to the transaction is a state-owned enterprise that is clearly more powerful than the regulatory body that supposedly supervises it, or when the ultimate beneficiary of the sale of emission credits is the party in power. When the scale of the regulatory effort is global, no world court exists to litigate the trustworthiness of the pollution reductions that become emission credits.

If trading GHGs were a routine commercial transaction, the normal solution would be to punish the wrongdoer or compensate the loser. This is not a normal transaction. Unchecked releases of GHGs impose their injury on the public, but the public is rarely party to the deal, except through the watchful eyes of government regulators. Faked GHG reductions do a kind of damage that cannot be fixed with conventional remedies such as fines or jail time.

Environmental trading programs, whether their purpose is to reduce domestic pollution or to manage global GHGs, present special logistical and conceptual problems that go well beyond the normal challenges of conventional environmental regulation. In normal regulation, someone (a lawmaker or regulator) places a specific numerical limit on the amount of pollution that a plant can emit. Someone else is designated to monitor whether the goal is being met. Even this elementary monitoring requires a combination of equipment, vigilance, and enforcement that few countries can provide.

Trading takes these requirements to a new level of difficulty. Not only will legal limits vary widely (it is entirely possible, in fact, that every factory might legally emit at a different level from its peers, creating a logistical nightmare for enforcement officials), but what is being traded is an invisible, intangible commodity: the right to emit a given amount of CO2.

Trading so abstract a commodity demands a highly sophisticated understanding of property rights and of the role of law in supporting those rights. Issues of ownership—even basic comprehension of what it means to be an owner—and of contract rights and obligations are paramount. If a society manifests confusion about the ownership of certain kinds of tangibles, as is often the case in countries emerging from state socialism, imagine how much more difficult it is to sort out and document the ownership of future rights to gas emissions from a factory. And that is just when the factory is acting in good faith.

What about the cheaters? Keeping companies honest is hard enough in a robust legal and regulatory environment, as Enron’s sham energy trades and WorldCom’s balance-sheet fraud amply demonstrated. In a weak legal system, the potential for emissions trading fraud is enormous.

Finally, there is the issue of motivation to participate. The theory behind trading is that factories, power plants, and anyone else that generates CO2 will be eager and capable partners in deals to buy and sell emissions. Nothing seems more obvious to those of us raised in the Western economies.

But the theory rests on three faulty assumptions. The first is that industry wants to save on compliance costs. Where pollution laws have been nothing more than paper, industry knows it need not worry much about environmental compliance; these things can be worked out. Plants that aren’t being forced to comply with environmental requirements may not see the point in cutting compliance costs through elaborate trading regimes. It does not buy them anything more than they already have, which is a free ride to pollute. No one has yet demonstrated why industry or regulators are likely to take GHG reductions any more seriously or be more effective in regulating them.

The second assumption is equally intuitive and unfortunately wrong: that the opportunity to trade will reveal a natural instinct to make a profit and to do so in the most efficient way possible. Even questioning this seems preposterous in the frame of reference we bring from the Western economies. But in much of the world, efficiency and profit are secondary to production or full employment goals, and failing companies continue to be kept afloat by soft budgets: essentially government bail-outs.

As reluctant as we may be to acknowledge this, counting profit and loss may be a challenge to managers of enterprises in some parts of the world. In the Western economies, accounting tools let managers know whether they have turned a profit. But for plant managers in the Soviet Bloc, accounting was a way to understand whether they met production goals set by party bosses. Nikita Khrushchev’s economic reforms introduced the concept of “profit” as a planned category, calculated as a fixed percentage of cost. Old habits die hard, and numerous observers have noted that fundamental conditions remain pretty much the same, even though the economy and enterprises have been formally privatized.

The third weak assumption behind emissions trading is that even if plants around the world are not themselves motivated to embrace clean technologies, they will accept them when offered in the context of Kyoto’s flexible mechanisms. Certainly, any factory in any part of the world can recognize that someone offering free equipment is offering something of value. The tricky part is whether the manager of that plant has any incentive to turn the equipment on and pay its running costs, to keep it running night and day, day in and day out, and to clean it from time to time. Normally, none of this happens without a watchful eye in the form of disinterested enforcement.

In short, no trading system can operate independently of the prevailing culture. This is equally true in Europe, where market incentives are foiled by deeply rooted traditions of government intervention, by the relationship between government and industry, and by each nation’s unique political heritage. And it is certainly true in any country of the developing world, as a recent report by India’s Center for Science and Environment (CSE) confirms. CSE looked at two active CDM projects and concluded that it is impossible to check whether the transactions meet Kyoto standards because their terms are not transparent; that the projects may have been approved by Indian authorities on the basis of the prestige of the consultant that validated the projects rather than the projects’ merits; and that certain conditions of the transactions are yet to be met, despite being specified in the project design document. CSE questioned whether the process or the results contributed to genuine sustainable development or the purposes of GHG reduction.

Much rides on the Kyoto Protocol’s frail shoulders. The disruptions caused by melting icecaps and flooding may prove more severe than those produced by war, and longer lasting. Experts predict mass exoduses as entire populations seek higher ground. Shifting water currents might change our food supply and how we lead our lives. Conflict is inevitable.

The flexible mechanisms of the Kyoto Protocol and the promise of technology offer intriguing but ultimately unconvincing answers to this looming problem. Trading and technology will play a role in taming climate change, but they are only a piece of the solution, not the entire answer. A strong dose of realism is past due. Even granting that cap and trade can be a model, attention must shift to the cap and how to make it work.

GHG emissions cannot be brought under control without the same hard work that must be applied to control any other pollution threat. The first step must be a genuine commitment, country by country, starting with the United States, to capping and then reducing GHG emissions. Experience indicates that this can happen only by instituting independent regulation and enforcement. What this will mean as a practical matter may differ from country to country, but overall the goal must be to identify what can be done to build the developing world's ability to ensure more reliable compliance, monitoring, and enforcement.

Alternatively, we could accept the view of some experts that we are already at the point of no return. The economist Thomas Schelling, among others, has suggested that the best option now is to build the survival and adaptation capacity of countries that will be disproportionately hit, so that low-lying or otherwise vulnerable nations will not have to pay an excessive price for the failure of the world to grapple with the challenge of climate change.

But any of these measures, whether preventive or palliative, requires an unusual steadiness of purpose, political will, and a longer view than we seem capable of mustering. Like the frog that feels the gradually warming water only when it is too hot to survive, human beings are apparently lulled by the gradually warming atmosphere into the false hope that we have many years ahead to deal with the problem. This illusion has some parallels with terrorism, where warnings were available for years to anyone who wanted to listen, but it took September 11 to move the issues to the front of the queue. Even though the consequences of ignoring global warming could be more chilling than parcel bombs detonated in the Western capitals, we are putting our eggs into a theoretical basket constructed from vain hopes that problems like this will essentially take care of themselves.

Restoring Rivers

Between 1973 and 1998, U.S. fresh waters and rivers were getting cleaner. But that trend has reversed. If the reversal continues, U.S. rivers will be as dirty in 2016 as they were in the mid-1970s. Water quality is not the only problem. In parts of the United States, the extraction of surface water and groundwater is so extreme that some major rivers no longer flow to the sea year round, and water shortages in local communities are a reality.

The damage and suffering wrought by Hurricane Katrina demonstrate that the restoration of waterways and wetlands is not a luxury. It is a national imperative. And the imperative does not apply only to hurricane-prone coastal waterways. More than one-third of rivers in the United States are impaired or polluted.

The flood-storage capacity of U.S. rivers is at an all-time low. Water shortages are increasingly common even in eastern states that historically have had plenty of water. Aquatic wildlife is going extinct at a rate much higher than that of organisms in either terrestrial or marine ecosystems. In June 2004, drinking water in Wisconsin was found to contain nitrate at levels above those considered safe. In July 2005, scientists reported that high levels of nutrients and sediments from river tributaries had created a dead zone that blanketed a third of the Chesapeake Bay. In October 2005, homes in Connecticut, New York, and New Jersey were flooded when rivers overflowed their banks after a week of rain. All three of these disasters are linked to the degradation of rivers and streams. And all three could have been prevented by ecological restoration.

River restoration means repairing waterways that can no longer perform essential ecological and social functions such as mitigating floods, providing clean drinking water, removing excessive levels of nutrients and sediments before they choke coastal zones, and supporting fisheries and wildlife. Healthy rivers and streams also enhance property values and are a hub for recreation. Clearly, degraded rivers and streams need to be repaired.

However, just as rivers are in need of restoration, so too are the art and practice of restoration itself. A recent study (in which we participated) published in Science documented a huge number of restoration projects being implemented in every region of the country at great cost and for a variety of reasons. The projects range from land acquisition (at a median cost of more than $800,000), to bank or channel reshaping to restore floodplains (median cost more than $200,000), to keeping livestock out of rivers and streams (median cost $15,000). However, distressingly few of them—just 10% of all restoration project records in the database put together by the National River Restoration Science Synthesis (NRRSS)—included any mention of assessment or evaluation. The study concluded that it is currently impossible to use existing databases to determine whether the desired environmental benefits of river restoration are being achieved. Even when monitoring was reported, it typically was an assessment of project implementation, not ecological outcomes.

The nation can do better. The United States needs regulatory and legislative federal policy reforms in order to improve the effectiveness of river restoration and thus the health of the nation’s waterways.

How did the United States reach a point where the majority of our rivers are degraded and ecologically dysfunctional? People have always chosen to live and work near water. Cities and industrial facilities began to grow up along U.S. waterways centuries ago, and for most of U.S. history, dilution was the solution to pollution. U.S. streams and rivers were the dumping grounds for waste, and the hope was that the waste would be carried away.

Settlers also cut down riparian forests and filled in small tributaries and wetlands to make transportation and building easier. There was little understanding of the ecological roles that these forests and tributaries fill. In the first half of the 20th century, massive dams were erected with the goal of supplying power and minimizing floods. They did accomplish those objectives, but damming also led to the loss of water-starved native plants and animals downstream. They could not survive and reproduce without the seasonal changes in flow that the river had always brought them and that their life cycles depend on.

With increasing industrialization and population growth, cities and industries not only continued to dump raw sewage and other wastes into streams and rivers, but also “paved” many U.S. streams—lined them with concrete so they could convey water and pollutants more rapidly. Streams were viewed as pipelines, not the living entities we now know they are. Forgotten was their ability, when healthy, to cleanse water, store sediments, and provide materials essential to healthy coastal fisheries.

The crisis came in the 1960s, when it became known that two-thirds of U.S. waterways were polluted. In 1972, the Clean Water Act (CWA) was passed. Since then, U.S. rivers and streams have become healthier, largely because of controls over point-source pollution. Then in 2004, for the first time since the act was passed, the Environmental Protection Agency (EPA) reported that waterways were once again getting dirtier.

The primary reason why so many rivers and streams are still being degraded today is poor land stewardship. Human activities and alterations of the landscape have diverse and far-reaching effects. As land is cleared to build homes and shopping malls, entire watersheds are affected. Construction and the erosion of farmland introduce massive amounts of sediment into streams. Many streambeds are covered by heavy layers of silt. This silt suffocates fish eggs and invertebrates living on streambeds, destroys aquatic habitat, and even can interfere with the treatment of drinking water. Agriculture and urbanization move excessive amounts of nutrients and toxins from the land to rivers, streams, and coastal waters.

When land is cleared and replaced with hard surfaces such as parking lots and rooftops, stream flows are governed primarily by overland runoff or inputs from stormwater systems, not by the interaction of local climate, vegetation, and soil characteristics. Rainwater no longer soaks into the soil before it moves underground toward stream channels, the normal route for water flow in healthy temperate watersheds. Rapid runoff prevents replenishment of the groundwater table and results in “flashy” stream flows. Flashiness means more floods that can damage nearby property as well as flows so low in the summer that entire channels dry up. Stream and river banks begin to scour and slump. Channels widen and deepen. If the riparian vegetation has been removed, which is true for a huge number of streams and most rivers in the United States, erosion worsens. Moreover, the large amount of pavement in urban watersheds retains a lot of heat in the summer, so the flashy flows that follow heavy rains inundate the streams with pulses of warm water, destroying fish and bottom-dwelling organisms.
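
The article gives no figures here, but a standard first-order approximation from engineering hydrology, the rational method, illustrates why paving a watershed makes flows flashy. Peak runoff scales with the runoff coefficient C (the fraction of rainfall that runs off immediately), rainfall intensity i, and drainage area A; the coefficient values below are typical textbook ranges, not measurements from any particular watershed.

    \[
    Q_{\text{peak}} = C\, i\, A, \qquad
    \frac{Q_{\text{paved}}}{Q_{\text{forested}}} \approx \frac{C_{\text{paved}}}{C_{\text{forested}}}
    \approx \frac{0.9}{0.2} \approx 4.5 .
    \]

For the same storm over the same area, a largely paved watershed can therefore deliver several times the peak flow of a forested one, while the water that once soaked in to recharge groundwater is lost as runoff.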

How has the United States tried to solve these problems? Are the solutions working? It is not as if nothing has been done. The CWA went a long way toward minimizing point-source inputs of pollutants to rivers. Unfortunately, rapid changes in land use and the many effects that urbanization and agriculture have had on rivers and streams are not as easy to remedy as point-source discharges.

Attempts have been made to minimize those effects. For example, the Conservation Reserve Enhancement Program of the Department of Agriculture’s Farm Service Agency paid farmers to participate in long-term conservation projects such as planting riparian buffers on their property or keeping land out of agriculture. In recognition that more diffuse sources of pollution of waterways have become increasingly common, the EPA recently adopted stricter standards for control of stormwater runoff. This is Phase II of the National Pollutant Discharge Elimination System, which extends permitting regulations to smaller population centers. These regulations require communities and public entities to develop, implement, and enforce a stormwater program designed to reduce the discharge of pollutants. In addition, many cities and towns enacted their own regulations to slow the clearing of land, to require that only a minimal number of trees be removed during construction, or to mitigate damage to forests and wetlands.

Despite these and many other efforts to minimize the environmental impact of developing the land or extracting natural resources (such as mining), streams and rivers have continued to degrade. The controls have simply not been able to keep up with the rate of development and associated watershed damage. Moreover, many rivers and streams were suffering years before conservation programs were enacted.

River and stream restoration thus grew out of the recognition that active interventions and aggressive programs were needed to improve the health of U.S. waterways. There are many ways to restore rivers, and they vary depending on the underlying problem. The most common goals of river and stream restoration are to improve water quality, manage or replant riparian vegetation, enhance in-stream habitat, provide for fish passage, and stabilize banks. Practices for accomplishing these goals are diverse and overlapping.

For example, bank stabilization can be achieved in a number of ways, including riparian plantings, wire baskets filled with stones, large slabs of concrete, and rope netting. Improving water quality may involve enhancing upstream stormwater treatment and planting vegetation along stream banks. Enhancing habitat and improving fisheries may require adding logs or boulders to streams, constructing ladders for migrating fish that cannot pass dams, or reconnecting a river floodplain to its channel to provide spawning habitat or nursery grounds for young fish.

Restoration activities such as these are now common in the United States. For example, in Kentucky, the Lexington-Fayette Urban County Government has an annual Reforest the Bluegrass day in which thousands of trees are planted along local streams in order to improve water quality and habitat for aquatic life. In suburban Maryland, restoration of a stream in the Paint Branch watershed involved installation of a bypass pipe that redirected warm stormwater coming from a subdivision. This kept the stream water cool, eliminating a thermal barrier to trout, while reducing peak flows during storms. Near Albuquerque, New Mexico, portions of the west bank of the Rio Grande were cleared of the invasive Russian olive plant, and land adjacent to the river was lowered to allow water to flow over the bank during spring snowmelt. This has helped maintain native vegetation and created a functional floodplain.

These examples are among the river restoration success stories. Unfortunately, we also know of many, many failures. In Maryland, an effort to reconfigure a stream channel using bulldozers and artificially created pools resulted in flooding. Fixing the problem required straightening the stream channel to restore its prior form, as well as a large expenditure of money and time. In California, large amounts of gravel are added to rivers every year to provide spawning habitat for salmon, but it is not clear whether this gravel remains in place or whether salmon populations are increasing. In the Midwest, sand traps are dug along agricultural fields to prevent silting and eutrophication of adjacent streams, yet large amounts of nutrients still move down the Mississippi to the Gulf.

One of the most pervasive reasons for restoration failure is the implementation of a project at one point along a stream without knowledge of upstream conditions. If serious upstream problems are not addressed, riparian replanting or channel stabilization projects that are implemented downstream are likely to fail.

These failures and the fact that the health of coastal areas such as the Chesapeake Bay continues to decline despite thousands of stream and river restoration projects demonstrate that something is wrong with U.S. restoration policies. The health of U.S. waterways is not improving fast enough despite the fact that the number of projects in the United States is increasing rapidly. The country is now spending well over $1 billion per year on river and stream restoration, and it is not getting its money’s worth.

The problem is that there are no policies to support restoration standards, to promote the use of proven methods, or to provide basic data needed for planning and implementing restoration. Although much is known about effective restoration, this information has not been used in most projects or policies. For this reason, many restoration efforts fail: Stream banks collapse, pollutants from upstream reaches that were never considered for restoration overwhelm newly restored sites downstream, channel reconfiguration projects that were overengineered are buried in sediment, and flood waters flow over river banks.

What to do? First, government officials must deal with the fact that most restoration projects are being done piecemeal, with little or no assessment of ecological effectiveness. They do not even know which of the various restoration approaches are most effective. In addition, there is little coordination among restoration plans and projects in most watersheds.

Second, because there are no national standards for measuring success in restoration, there is no system for evaluating how effective projects are. Watershed managers and restoration practitioners have little guidance for choosing among various restoration methods to ensure ecological improvements.

Third, the United States has no national tracking system that gathers basic information on what is done where and when. Thus, there is no way to prioritize projects based on what is being done elsewhere or what is known to work.

The solution to pollution is to reform federal, state, and local policies. Here we address the federal level because of the critical role that federal policies play in funding and permitting restoration projects. Different regulations and laws are needed in four areas.

Federal agencies must be directed to adopt and abide by standards for successful river and stream restoration. Progress in the science and practice of river restoration has been hampered by the lack of agreed-on criteria for judging ecological success. The restoration community—which includes legislators, agencies, practitioners, and citizen groups—should adopt common criteria for defining and assessing ecological success in restoration. Success can also be achieved through the involvement of stakeholders and learning from experience, but it is ecological success that will improve the health of U.S. waterways.

Five basic standards have been recommended by an eminent team of U.S. scientists and engineers and endorsed by an international group of river scientists as well as by restoration practitioners. The standards are:

  • The design of a river restoration project should be based on a specific guiding image of a more dynamic, healthy river.
  • The river’s ecological condition must show measurable improvement.
  • The river system must be more self-sustaining and resilient to external perturbations, so that only minimal follow-up maintenance is needed.
  • During the construction phase, no lasting harm should be inflicted on the ecosystem.
  • Both pre- and post-assessments must be completed and data made publicly available.

Simple metrics are already available that can be applied to each standard, so implementing the standards would not be difficult. If federal agencies involved in funding river restoration adopted these standards, it would go a long way toward ensuring that projects meet their stated ecological goals.

Congress should ensure that restoration projects are credible by requiring recipients of federal funds to adhere to the standards for ecologically successful restoration projects. Several notable authorization bills that govern river restoration at the federal level are the Water Resources Development Act (WRDA), the farm bill, and the transportation bill. These bills can be used to establish requirements for monitoring outcomes and tracking agency performance. Authorization bills can thereby direct money increasingly toward effective strategies. Appropriations bills can also require agencies to adopt standards. These bills often include language that directs an agency to take specific action. Requiring an agency to adopt particular standards for implementing ecological restoration projects could also be included in an agency’s annual appropriations.

Regardless of the mechanism, requiring standards that ensure better tracking and evaluation of restoration project outcomes will enhance not only individual projects but also others that follow. Accountability is essential.

Federal standards and guidelines that have implications for river restoration should be amended to incorporate the five proposed basic standards. This should include the revision of restoration manuals and design criteria. Formal policies, guidance documents, and manuals include not only those dealing with the design and implementation of projects, but also those offering general guidance or addressing restoration planning or oversight. Although amending statutes or regulations may be ideal, and in some cases necessary, to embed basic standards for ecological restoration, amending present practices and procedures will also help. Because a number of agencies provide funding for river restoration, often with agency-specific areas of emphasis, it may be preferable to assign responsibility for the development of standards and guidelines to individual agencies. However, all should comply with the five goals listed above, and mechanisms must be developed to ensure interagency coordination and common reporting.

For example, the U.S. Army Corps of Engineers undertakes a wide array of activities that affect rivers and streams, including river restoration. Like most agencies, the Corps is supposed to adhere to statutes and the interpreting regulations that govern these activities to protect and restore environmental quality, among other public interests. The Corps, like other agencies, is further governed by standards and guidelines. These are outlined in a series of engineer manuals that provide general guidance on how to undertake various activities, including, for example, how to construct flood-control projects, how to stabilize riverbanks, and how to manage water releases from dams. Because of the significant increase in the amount and frequency of the Corps’ involvement in river restoration projects, the agency should publish a new engineer manual that outlines acceptable practices and specifications for river restoration. Those should include some codification of the five basic standards for ecological success described above or something similar. Other agencies—for example, the Department of Agriculture and the Natural Resources Conservation Service—use similar guidance documents. They also should undertake similar revisions.

In addition, new restoration funding programs should be formed and some of the existing ones reformed. There is a critical need for federally funded programs that focus on specific regions and serve as model programs in balancing both human and ecosystem needs to maximize the restoration of services that healthy rivers provide.

To incorporate key elements of sound science into river restoration, funded projects must be goal- or hypothesis-driven, place a high priority on monitoring, use restoration designs that minimize environmental impacts, and demonstrate ecological improvement. These key elements should be incorporated into the internal policies and guidelines of the new or reformed restoration programs and be met before grants or contracts to fund restoration are awarded. This will require programmatic tools to assist groups in effective project implementation and compliance with environmental, monitoring, and reporting regulations. Existing programs that attempt to do what we have outlined include the CALFED Bay-Delta Ecosystem Restoration Program and the Chesapeake Bay Program.

A coordinated tracking system for restoration projects must be implemented. If the restoration of aquatic ecosystems is to be effective, restorers must be able to learn from past efforts. At present there is no coordinated tracking of river restoration projects. Existing federal databases are highly fragmented, often relying on ad hoc or volunteer data entry. They are inadequate for evaluating even the most basic trends in river restoration. Bernhardt et al. found that these national databases cataloged less than 8% of the projects in the NRRSS database. They could not be used to evaluate regional differences in restoration goals, expenditures, and assessment.

Agencies at all levels of government that share jurisdiction over rivers and streams are engaging in restoration projects of various kinds. Legislators who authorize projects and appropriate funds are also taking greater notice of the opportunities presented by river restoration projects that can drive funding toward their districts and constituents. Thus, there is an urgent need for a centralized tracking system that catalogs every stream and river restoration project implemented in the United States. This should include at least the following information for every project: Geographic Information Systems coordinates; spatial extent, intent, and goals; a catalog of project actions; implementation year; contact information; cost; and monitoring results.
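
One way to picture such a record is the minimal data structure sketched below in Python; the field names and categories are illustrative assumptions, not an existing federal schema.

    # Illustrative sketch of a per-project tracking record; not an existing schema.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class RestorationProject:
        latitude: float                   # GIS coordinates of the project site
        longitude: float
        spatial_extent_km: float          # length of stream or river treated
        intent: str                       # stated goal, e.g. "bank stabilization"
        actions: List[str]                # catalog of project actions taken
        implementation_year: int
        contact: str                      # responsible agency or practitioner
        cost_usd: float
        monitoring_results: Optional[str] = None  # summary of or link to data
        monitoring_performed: bool = False

    # Standardized categories (the "drop-down box" idea discussed below) keep
    # entries mergeable across agency databases.
    RESTORATION_CATEGORIES = [
        "water quality management", "riparian management",
        "in-stream habitat improvement", "fish passage", "bank stabilization",
    ]

Restricting categorical fields to a fixed vocabulary is what later makes databases maintained by different agencies easy to merge.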

Restoration monitoring presents its own suite of challenges. Implementation monitoring—determining whether a project was built as designed, is in compliance with permit requirements, and was implemented using practices that had been vetted by effectiveness-monitoring research programs—is not carried out routinely. This monitoring should be required of all projects and included in the centralized database. This ensures compliance with permits and with stated intents and requires that monitoring be part of project planning and designed in light of project goals.

Effectiveness monitoring involves an in-depth research evaluation of ecological and physical performance to determine whether a particular type of restoration or method of implementation provides the desired environmental benefits. Because effectiveness monitoring is time-consuming and expensive, it is unrealistic to make this a routine expectation. However, it is necessary when comparing the effectiveness of different restoration approaches, when evaluating unproven restoration practices, or when the ecological risks of a project are considered high.

The national tracking of restoration projects, including implementation and effectiveness-monitoring information, will ensure that projects are chosen wisely and that money is spent carefully. The EPA is a good candidate to house such a system because of its role in overseeing compliance with the CWA and its involvement in ecological monitoring. The U.S. Geological Survey is also a good candidate because of its involvement in water science and stream-flow monitoring. Regardless of which agency houses the national database tracking stream restoration, it is essential that individual agencies and other reporting entities maintain compatible databases with an internal structure that allows easy merges and “drop-down” boxes to ensure reporting in standardized categories.

Undertake a national study to evaluate the effectiveness of restoration projects. Because restoration effectiveness has not received adequate attention, it is not always clear which restoration methods are most appropriate or most likely to lead to ecological improvements. Agency practitioners often rely on best professional judgment that their projects are meeting intended ecological goals, rather than undertaking scientific measurement and evaluation. Only a small fraction of projects are currently being monitored to determine their relative success, so little is known about the environmental benefits. Something must be done to ensure that projects are doing what they set out to do and that money is well spent.

Although monitoring of project effectiveness in meeting ecological goals is always desirable, not every project requires sophisticated and costly effectiveness monitoring. In fact, many people worry that such an expectation would diminish the number of restoration projects on the ground by siphoning off available resources.

One way to balance the need for evaluation and accountability with limited resources is to conduct detailed monitoring of a sample of projects. The information gained would provide an efficient means of understanding project effectiveness and help restorers learn from the experience of others. Such a program could involve detailed monitoring of a sample of all projects within each of the major categories of river and stream restoration, perhaps beginning with the most interventionist restoration practices (such as channel reconfiguration) or the most costly forms of restoration (such as floodplain reconnection).
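
A minimal sketch of that sampling idea, in Python: projects are grouped by restoration category, and a fixed fraction of each group is drawn for detailed effectiveness monitoring, with a higher fraction for the interventionist or costly categories named above. The rates and category names are illustrative assumptions, not recommendations.

    # Sketch of stratified sampling for detailed effectiveness monitoring.
    import random
    from collections import defaultdict

    def choose_for_monitoring(projects, base_rate=0.05, priority_rate=0.25,
                              priority_categories=("channel reconfiguration",
                                                   "floodplain reconnection")):
        # projects: list of dicts, each with at least a "category" key.
        by_category = defaultdict(list)
        for p in projects:
            by_category[p["category"]].append(p)

        selected = []
        for category, group in by_category.items():
            rate = priority_rate if category in priority_categories else base_rate
            k = max(1, round(rate * len(group)))  # monitor at least one per category
            selected.extend(random.sample(group, k))
        return selected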

In 2002, Congress included language in a Department of the Interior appropriations bill that directs the National Research Council (NRC) to examine federal and nonfederal water resources programs and provide recommendations for a national research program that would maximize the efficiency and effectiveness of existing programs. The NRC has completed several reports that evaluate water research programs and is considering a study on river restoration. We recommend that such a study panel be funded immediately and directed to recommend the design, scope, and costs of a new research program. The program should explicitly evaluate which restoration methods are most effective at achieving the desired goals and delineate which project types require only modest compliance monitoring and which need detailed monitoring.

Because the data are currently insufficient to enable an NRC study committee to reach firm conclusions through synthesis of existing data and expert analysis, the panelists should be directed to identify needed further research. Furthermore, the panel should identify the appropriate agency to oversee this research program in order to ensure that peer review of all research is conducted and that restoration effectiveness data are collected by an entity that is independent from those conducting or funding restoration. It is standard practice to conduct double-blind studies when human health is concerned. It is no less important when the future of clean water and freshwater resources is at risk.

Use existing funding for river restoration more efficiently and supplement funding. Although there are many areas in which Congress and federal agencies could make improvements in the policy and practice of river restoration, it is first necessary to ensure that existing funding is wisely allocated so that projects are successful. That means developing a mechanism to authorize and fund restoration projects in a much more coordinated fashion than the balkanized system that supports them today. There are more than 40 federal programs that fund stream and river restoration projects. Although large-scale high-profile projects such as those in the Everglades receive a great deal of attention, most projects in the United States are small in spatial extent. The cumulative costs and benefits of the many small restoration projects can be very high, which argues for better coordination.

We suggest that a Water Resources Restoration Act (WRRA) could serve as a mechanism to authorize and fund river restoration projects. Like WRDA, WRRA would support projects of various shapes and sizes, all for the purpose of making federal investments in natural capital and infrastructure. Money would still flow through individual agencies, but prioritization and coordination would be achieved through an administrative body with representation from all agencies that fund river restoration activities. That body would ensure that restoration funds are spent efficiently as well as address unmet needs. These projects would yield enormous benefits in the form of ecosystem services, including flood control, protection of infrastructure, and maintenance of water quality. They also would have benefits similar to those of more traditional infrastructure projects. They create jobs in member districts, but if they are carefully chosen and designed, they can also save taxpayer money. By including interagency tracking mechanisms and building in compliance monitoring requirements to each project, Congress can ensure the necessary feedback and accountability to make these projects wise investments. Instead of funds being allocated independently by 40-plus federal programs, WRRA would ensure that project prioritization occurs on watershed scales and is based on criteria that are consistent across the nation.

In addition to the need for better coordination and thus more efficient use of existing funding, current funding falls short of what is needed. The magnitude of the problems and the demands that citizens are making for healthier waters require additional funds for cleaning U.S. rivers and streams. Aging sewer and stormwater infrastructure, combined with increased development of the land, makes it imperative that a combined approach involving better coordination and an increase in funding be a priority. Additional funding will not only make possible more recovery of damaged river ecosystems, but will enable inter- or intra-agency mechanisms for tracking projects and allow more pre- and post-project monitoring of their effectiveness. New funding will not be easy to come by in the current budget climate and with increased competition for investments in water quality. Many federal programs that involve river restoration are being cut, not increased. The growing need for upgrading stormwater and sewer infrastructure goes hand in hand with river restoration; one cannot replace the other. Only together will they accomplish the goals of improved water quality, more productive fisheries, and the restoration of other services that rivers provide.

River restoration is a necessity, not a luxury. U.S. citizens depend on the services that healthy streams and rivers provide. People from all walks of life are demanding cleaner, restored waterways. Replacing the services that healthy streams provide with human-made alternatives is extremely expensive, so river restoration is akin to investments in highways, municipal works, or electric transmission. Congress already commits billions of taxpayer dollars to public infrastructure through the transportation bill and WRDA. It should make similar investments in natural capital.

Much can be accomplished by allocating scarce resources and prioritizing efforts based on sound policies that ensure that the most effective methods are applied and that agreed-on standards are adhered to. Changes in agency policies and practices require overcoming bureaucratic inertia and confronting competing constituencies. Instituting tracking systems and comprehensive studies of project effectiveness requires cooperation among multiple agencies, scientists, environmental groups, and affected industries. But with congressional oversight and wise appropriation of scarce dollars, U.S. rivers and streams can once again flow clear and clean.

Brain Mobility

The high level of participation of international scientists and engineers in U.S. laboratories and classrooms warrants increased efforts to understand this phenomenon and to ensure that policies regarding the movement and activities of highly trained individuals are sufficiently open and flexible to keep pace with the changing nature of research and technology.

Foreign-born students and scholars contribute at many levels—as technicians, teachers, and researchers and in other occupations in which technical training is desirable. They have also been shown to generate economic gains by adding to the processes of industrial or business innovation. As scientists and engineers become increasingly mobile, their activities will be an element in international relations and even foreign policy.

To maintain its leadership position in science and technology, the United States will have to do more to understand this global network of expertise, to provide a learning and research environment that attracts scholars, to prepare U.S. and foreign-born students alike to function in the evolving global research system, and to implement immigration policies that facilitate the migration of talented individuals so that they can contribute most effectively to global well-being.

Despite the growing presence of international science and engineering graduate students and postdoctoral scholars on U.S. university campuses, the data gathered by different sources on their numbers and activities are difficult to compare and yield only an approximate picture of their career status and contributions. The data presented below (which are taken from the National Academies report Policy Implications of International Graduate Students and Postdoctoral Scholars in the United States) provide a useful introduction to the subject, and they should serve as a catalyst for expanded U.S. efforts to study and welcome the emerging international network of scientists and engineers.

Growing foreign presence at universities

The percentage of international science and engineering (S&E) graduate students in U.S. universities grew from 23.4% in 1982 to 34.5% in 2002. Their presence was particularly strong in some fields. In 2002 international students were 35.4% of all graduate students in the physical sciences and 58.7% of those in engineering. The S&E postdoctoral population was even more international, with almost 60% coming from outside the United States. Information about this group is very limited.

[Figure: Full-Time S&E Graduate Enrollment by Citizenship. Source: National Science Foundation.]

[Figure: Academic Postdoctoral-Scholar Appointments in S&E. Source: National Science Foundation.]

A long period of U.S. dominance

Since the end of World War II, the United States, with 6% of the world’s population, has been producing more than 20% of the world’s S&E PhDs. The strength of the U.S. S&E enterprise is unlikely to falter in the near future, but over the longer term the United States faces challenges in maintaining its leadership.

[Figure: S&E Doctorate Productivity by Country, 1975–2001. Source: National Science Board.]

Signs of a new scientific order

Beginning in 1997, the 15 leading countries of the European Union have published more scientific articles than has the United States. U.S. articles are still the most cited, but European scientists are closing that gap. Perhaps the most important development is in international collaboration. The percentage of articles with authors from more than one country grew from 8% in 1988 to 18% in 2001. U.S. scientists participated in the majority of these collaborations, and it will be increasingly important for them to maintain these international relationships.

[Figure: Authorship of Scientific Articles by Country, 1988–2001. Source: National Science Board.]

Will Government Programs Spur the Next Breakthrough?

The future health of the U.S. economy depends on faith: the faith that a new general-purpose technology will emerge that will enable the tech-savvy United States to maintain its pace of rapid productivity growth. In the 20th century, these technological breakthroughs—jet aircraft, satellite communications, computers—always seemed to emerge magically when they were needed. Why should we not continue to believe in the magic of human ingenuity?

Although human ingenuity is indeed a wonder, a closer look at the history of the emergence of new technologies reveals that government R&D spending played an important role in the development of almost every general-purpose technology in which the United States was internationally competitive. In particular, defense-related research, development, and procurement played a pervasive role in the development of a number of industries—aircraft, nuclear power, computer, semiconductor, Internet and satellite communication, and Earth-observing systems—that account for a substantial share of U.S. industrial production.

Identifying this force behind the magic would be reassuring were it not for the fact that changes in government policy have reduced the type of federal R&D that spurred technological breakthroughs. At the same time, the private-sector R&D operations, such as Bell Labs, that performed much of this defense R&D while also supporting long-range research of their own have refocused their efforts on incremental technology improvements with a shorter-term payoff. The result is that although one can always maintain the hope that magical technological progress will occur, the government and industrial investments that made previous breakthroughs possible are shrinking. Human ingenuity is an abstract concept that will always be with us, but technological innovation is a more mundane activity that requires financial resources as well as inspiration. It is not obvious where the resources to propel the cutting edge of innovation will come from. And without that innovation, it is obvious that the United States will not be able to maintain the rate of productivity growth necessary to sustain its global economic leadership.

Technological maturity

After initially experiencing rapid or even explosive development, general-purpose technologies often experience a period of maturity or stagnation. One indicator of technological maturity has been a rise in the scientific and technical effort required to achieve incremental gains in a performance indicator. In some cases, renewed development has occurred along a new technological trajectory.

A new general-purpose technology often has no measurable impact on productivity in an industry or sector until the technology is approaching maturity. Nobel economist Robert Solow famously remarked in 1987 that he saw computers everywhere except in the productivity statistics.

The electric utility industry is a classic example. Although the first commercially successful system for the generation and distribution of electricity was introduced by Thomas A. Edison in 1878, it was not until well into the 20th century that the electrification of factories began to have a measurable impact on productivity growth. Between the early 1920s and the late 1950s, the electric utility industry was the source of close to half of U.S. productivity growth.

Electric power generation from coal-fired plants reached technological maturity between the late 1950s and early 1960s, with boiler-turbine units in the 1,000-megawatt range. The exploitation of renewable energy resources or the development of other alternative energy technologies could emerge over the next several decades as a possible new general-purpose technology. However, none of the alternative technologies, including nuclear power, appear at present to promise sufficient cost reduction to enable the electric power industry to again become a leading rather than a sustaining source of economic growth in the U.S. economy.

Aircraft production is an example of an industry in which a mature technological trajectory was rapidly followed by transition to a new technological trajectory. Propeller aircraft reached technological maturity in the late 1930s. The scientific and technological foundation for a transition to a jet propulsion trajectory was well under way by the late 1930s, but the transition to commercial jet aircraft would have occurred much more slowly without military support for R&D during World War II and military procurement during the Korean War. Thanks to these government efforts, the industry boomed in the 1950s and 1960s. Growth reached a plateau in 1969, when the launch of the Boeing 747 marked the technological maturity of commercial jet transport.

A similar story can be found in computer development. By the late 1960s, there were indications that mainframe computer development was approaching technological maturity, but new trajectories were opened up by the development of the microprocessor. The personal computer replaced the mainframe as the most rapidly growing segment of the computer industry and as an important source of output and productivity growth in the U.S. economy.

However, support from defense and space agencies contributed to continuing advances in supercomputer speed and power for high-end scientific and military use into the early 1990s. By the late 1990s, substantial concern was being expressed about the sources of future advances in all computer performance.

A continuing concern in the field of computers and information technology (IT) is how long microprocessors will continue their “Moore’s law” pace of doubling capacity every 18 months. It may be premature to characterize the computer and IT industries as approaching maturity, but the collapse of the dot-com stock market bubble of the late 1990s and the continuing consolidation of the industry suggest some caution about the expectation that this pace of progress can continue indefinitely.
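
Taking the quoted 18-month doubling figure at face value shows how demanding the pace is to sustain: capacity must grow by roughly three orders of magnitude every 15 years.

    \[
    N(t) = N_0 \, 2^{\,t/1.5} \quad (t \text{ in years}), \qquad
    \frac{N(15)}{N_0} = 2^{15/1.5} = 2^{10} = 1024 .
    \]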

Historically, new general-purpose technologies have been the drivers of productivity growth across broad sectors of the U.S. economy. It cannot be emphasized too strongly that if either scientific and technological limitations or cultural and institutional constraints delay the emergence of new general-purpose technologies over the next several decades, the result will surely be a slowing of U.S. productivity growth. Endless novelty in the technical elaboration of existing general-purpose technologies can hardly be sufficient to sustain a high rate of economic growth. In the case of the general-purpose technologies that emerged as important sources of growth in the United States during the second half of the 20th century, it was primarily military and defense-related demand that initially drove these emerging technologies rapidly down their learning curves.

As the general-purpose technologies that were induced by defense R&D and procurement during the past half century mature, one must ask whether military and defense-related R&D and procurement will continue to be important sources of commercial technology development.

During the first two decades after World War II, it was generally taken as self-evident that substantial spinoff of commercial technology could be expected from military procurement and defense-related R&D. Although this assumption seemed reasonable in the 1950s and 1960s, the slowing of U.S. economic growth that began in the early 1970s called it into question.

Beginning in the mid-1980s and continuing into the mid-1990s, the new conventional wisdom argued that “dual-use” military-commercial technology would resolve the problem of rising cost and declining quality in post–Cold War military procurement at the same time that it stimulated the civilian economy. The Clinton administration initially embraced, at least at the rhetorical level, the dual-use concept.

Clinton administration actions, however, helped to undermine the dual-use strategy. In 1993, Deputy Secretary of Defense William Perry announced an end to a half-century effort by the Department of Defense (DOD) to maintain rivalry among defense contractors producing comparable products such as tanks, aircraft, and submarines. The change in policy set off a flurry of mergers and acquisitions that reduced the ranks of the largest contractors (those with sales of over $1 billion) from 15 in 1993 to 4 by the end of the decade. With substantially reduced competition in their defense and defense-related markets, the remaining contractors felt less pressure to pursue a dual-use technology development path.

In retrospect it seems clear that the dual-use and related efforts were badly underfunded. They encountered substantial resistance from both DOD and the large defense contractors. The 1994 Republican Congress, as part of a general attack on federal technology development programs, eliminated the budget for DOD’s Technology Reinvestment Program, which was intended to help convert defense-only R&D activities to a dual-use focus.

By the early 1990s, it was becoming clear that changes in the structure of the U.S. economy, of the defense industries, and of the defense industrial base had induced substantial skepticism that military and defense-related R&D and procurement could continue to play an important role in the generation of new general-purpose commercial technologies. By the turn of the century, the share of output in the U.S. economy accounted for by the industrial sector had declined to less than 15%. Defense procurement had become a smaller share of an economic sector that itself accounted for a smaller share of national economic activity. The absolute size of defense procurement had declined to less than half of the 1985 Cold War peak.

Since the end of the Cold War, the objectives of the defense agencies have shifted toward enhancing their capacity to respond to shorter-term tactical missions. Procurement shifted from a primary emphasis on new advanced technology to an emphasis on developing new processes and systems and to retrofitting legacy technologies. This trend was reinforced by an emerging consensus that the threat of system-level war ended with the Cold War. Many defense intellectuals had come to believe that major interstate wars among the great powers had virtually disappeared. The effect has been to reduce incentives to make long-term investments in defense and defense-related “big science” and “big technology.”

Would it take a major war, or threat of war, to induce the U.S. government to mobilize the necessary scientific, technological, and financial resources to develop new general-purpose technologies? If the United States were to attempt to mobilize the necessary resources, would the defense industries and the broader defense industrial base be capable of responding? It was access to large and flexible resources that enabled powerful bureaucratic entrepreneurs such as Leslie Groves (nuclear weapons), Hyman Rickover (nuclear submarines), Joseph Licklider (computers), and Del Webb (satellites) to mobilize the scientific and technological resources necessary to move new general-purpose technologies from initial innovation toward military and commercial viability. The political environment that made this possible no longer exists for defense-related agencies and firms.

Private sources

Can private-sector entrepreneurship be relied on as a source of major new general-purpose technologies? Probably not. Most major general-purpose technologies have required several decades of public or private support to reach the threshold of commercial viability. Private firms see little value in investing in expensive high-risk research that might produce radical breakthroughs when the gains from advances in broadly useful technology are so diffuse that they are difficult to capture.

Decisionmakers in the private sector rarely have access to capital that can wait decades or even a single decade for a return. Lewis Branscomb and his Harvard University colleagues note in Understanding Private Sector Decision Making for Early Stage Technology Development that many of the older research-intensive firms have almost completely withdrawn from the conduct of basic research and are making only limited investments in early-stage technology development.

Entrepreneurial firms have often been most innovative when they have had an opportunity to capture the economic rents opened up by complementary public investment in research and technology development. The U.S. commercial aircraft industry was unwilling to commit to jet aircraft until the reliability and fuel efficiency of the jet engine had been demonstrated by more than a decade of military experience. The development of the ARPANET in the early 1970s was preceded by more than a decade of R&D by the Advanced Research Projects Agency’s Information Processing Techniques Office. It took another two decades of public support before a successful commercial system was developed. Even the most innovative firms often have great difficulty pursuing more than a small share of the opportunities opened up by their own research. It is difficult to imagine how the private sector will, without substantial public support for R&D, become an important source of new general-purpose technologies over the next several decades.

The conclusion that neither defense R&D and procurement nor private-sector entrepreneurship can be relied on as an important source of new general-purpose technologies forces a third question onto the agenda. Could a more aggressive policy of public support for R&D directed to commercial technology development become an important source of new general-purpose technologies?

Since the mid-1960s, the federal government has made a series of efforts to create programs in support of the development and diffusion of commercial technology. Except in the fields of agriculture and health, these efforts have had great difficulty in achieving economic and political viability. Funding of the programs authorized by the 1965 State Technical Services Act, which provided support for universities to provide technical assistance to small and medium-sized businesses, was a casualty of the Vietnam War. The very successful federal/private cooperative Advanced Technology Program of the National Institute of Standards and Technology barely survived the congressional attacks on federal technology programs that took place after the 1994 midterm elections, and it has been under constant attack since. The SEMATECH semiconductor equipment consortium is another model of successful public/private cooperation in technology development, but it has not been replicated in other industries. The United States has not yet designed a coherent set of institutional arrangements for public support of commercial technology development. Furthermore, even the successful programs referred to here have been designed to achieve short-term incremental gains rather than the development of new general-purpose technologies.

R&D in molecular genetics and biotechnology is a major exception. I argued in Technology, Growth, and Development that molecular biology and biotechnology will be the source of the most important new general-purpose technologies of the early decades of the 21st century. For more than three decades, beginning in the late 1930s, the molecular genetics and biotechnology research leading to the development of commercial biotechnology products in the pharmaceutical and agricultural industries was funded almost entirely by private foundations, the National Science Foundation, the National Institutes of Health, and the national energy laboratories, and was performed largely at government and university laboratories.

When firms in the pharmaceutical and agricultural industries decided to enter the field in the 1970s, they found it necessary to make very substantial grants to and contracts with university laboratories to obtain a “window” on the advances in the biological sciences and in the techniques of biotechnology that were already under way in university laboratories. When defense agencies in the United States and the Soviet Union began to explore the development of bioweapons and their antidotes, they also found it necessary to tap expertise available only in university and health agency laboratories.

The fact that I do not see any general-purpose technology revolution on the horizon does not mean that one has not begun to develop. If I had been writing this article in the mid-1970s, I would not have noticed or appreciated the commercial potential of research on artificial intelligence that had been supported by the Defense Advanced Research Projects Agency’s Information Processing Techniques Office since the early 1960s. I certainly would not have anticipated the emergence or development of the Internet and its dramatic commercial and cultural effects. It is possible that one or more of the nanotechnologies will produce powerful new general-purpose technologies, perhaps in materials science or in the health sciences, but at this stage I find it difficult to separate solid scientific and technical assessment from the hype about nanotechnology’s promise.

If forced to guess the source of the next economy-rattling technological earthquake, I would name two scientific and technological challenges as the most likely candidates, because each is likely to attract the substantial public investment that I believe is essential to developing a new general-purpose technology.

One is in the area of infectious disease: the demand to develop the knowledge and technology to confront the coevolution of pests, pathogens, and disease with control agents. We have been increasingly sensitized to the effects of this coevolution by the resurgence of tuberculosis and malaria, the emergence of new diseases such as AIDS and Ebola, and the threat of a new global influenza epidemic. The coevolution of human, nonhuman animal, and crop plant pests, pathogens, and diseases with control technologies means that chemical and biological control technologies often become ineffective within a few years or decades. This means, in turn, that maintenance research—the research necessary to sustain present levels of health or protection—must rise continuously as a share of a constant research budget.

At present, health R&D tends to be highly pest- and pathogen-specific. It is not apparent that current research will generate broad general-purpose medical and health-related technologies that are capable of addressing the demand for long-term sustainable protection, but at least the possibility exists.

The second is the threat of climate change. Measurements taken in the late 1950s indicated that carbon dioxide (CO2) was increasing in the atmosphere. Beginning in the late 1960s, computer simulations indicated possible changes in temperature and precipitation that could occur due to human-induced emission of greenhouse gases into the atmosphere.

By the early 1980s, a fairly broad consensus had emerged in the climate change research community that greenhouse gas emissions could, by 2050, result in a rise in global average temperature by 1.5° to 4.5°C (about 2.7° to 8.0°F) and a complex pattern of worldwide climate changes. By the early 2000s, it was clear, from increasingly sophisticated climate modeling exercises and careful scientific monitoring of Earth surface changes such as the summer melting of the north polar ice cap, that what oceanographer Roger Revelle had characterized as a “vast global experiment” was well under way. It was also apparent that an alternative to the use of carbon-based fossil fuels would have to be found.

Modest efforts have been made since the mid-1970s to explore renewable energy technologies. Considerable progress has been made in moving down the learning curves for photovoltaics and wind turbines. The Bush administration has placed major emphasis on the potential of hydrogen technology to provide a pollution-free substitute for carbon-based fuels by the second half of this century. The environmental threats and economic costs of continued reliance on fossil fuel technologies are sufficiently urgent to warrant substantially larger public support in the form of private-sector R&D incentives and a refocusing of effort by the national energy laboratories on the development and diffusion of alternative energy technologies. A major effort could yield a technological surprise with widespread application.

To be realistic, however, I do not foresee the seeds of a technological revolution in these efforts. Although immensely important, the health and energy technologies that government is likely to pursue will not resolve the problem of achieving rapid U.S. economic growth. In both cases, the emphasis is likely to be on maintenance technologies, which are necessary to prevent the deterioration of health and environment but unlikely to transform the entire economy.

The United States is going to continue investing in basic research that will produce revolutions in scientific understanding, but preeminence in scientific research is only loosely linked to preeminence in technology development. In a number of U.S. high-technology industries, it has been military procurement that enabled firms to move rapidly down their technology learning curves. If defense procurement is not going to force the development of new general-purpose technologies, the United States will need to develop a new strategy for catalyzing radical technological progress.

Rethinking, Then Rebuilding New Orleans

New Orleans will certainly be rebuilt. But looking at the recent flooding as a problem that can be fixed by simply strengthening levees will squander the enormous economic investment required and, worse, put people back in harm’s way. Rather, planners should look to science to guide the rebuilding, and scientists now advise that the most sensible strategy is to work with the forces of nature rather than trying to overpower them. This approach will mean letting the Mississippi River shift most of its flow to a route that the river really wants to take; protecting the highest parts of the city from flooding and hurricane-generated storm surges while retreating from the lowest parts; and building a new port city on higher ground that the Mississippi is already forming through natural processes. The long-term benefits—economically and in terms of human lives—may well be considerable.

To understand the risks that New Orleans faces, three sources of water need to be considered: the Atlantic Ocean, where hurricanes form that eventually batter coastal areas with high winds, heavy rains, and storm surge; the Gulf of Mexico, which provides the water vapor that periodically turns to devastatingly heavy rain over the Mississippi basin; and the Mississippi River, which carries a massive quantity of water from the center of the continent and can be a source of destruction when that water overflows its banks. It also is necessary to understand the geologic region in which the city is located: the Mississippi Delta.

The Mississippi Delta is the roughly triangular plain whose apex is the head of the Atchafalaya River and whose broad curved base is the Gulf coastline. The Atchafalaya is the upstream-most distributary of the Mississippi that discharges to the Gulf of Mexico. The straight-line distance from the apex to the Atchafalaya Bay is about 112 miles, whereas the straight-line distance from the apex to the mouth of the Mississippi is twice as long, about 225 miles. (These distances will prove important.) The Delta includes the large cities of Baton Rouge and New Orleans on the Mississippi River, and smaller communities, such as Morgan City, on the Atchafalaya. (Although residents along the Mississippi River at many places considerably to the north of New Orleans commonly refer to their floodplain lands as “the Delta,” the smaller rivers and streams here empty directly into the Mississippi River, not the Gulf of Mexico, and hence geologists more properly call this region the alluvial plain of the Mississippi River.)

The Mississippi River builds, then abandons, portions (called “lobes”) of the Delta in an orderly cycle: six lobes in the past 8,000 years (Fig. 1). A lobe is built through the process of sediment deposition where the river meets the sea. During seasonal floods, the river spreads over the active lobe, depositing sediment and building the land higher than sea level. But this process cannot continue indefinitely. As the lobe extends further into the sea, the river channel also lengthens. A longer path to the sea means a more gradual slope and a reduced capacity to carry water and sediment. Eventually, the river finds a new, shorter path to the sea, usually down a branch off the old channel. The final switching of most of the water and sediment from the old to the new channel may be triggered by a major flood that scours and widens the new channel. Once the switch occurs, the new lobe gains ground while the old lobe gradually recedes because the sediment supply is insufficient to counteract sea level rise, subsidence of the land, and wave-generated coastal erosion.
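A minimal sketch of the geometry behind this switching uses the straight-line distances given above as stand-ins for channel length; the assumed drop to sea level is a placeholder, and because both routes begin at the same junction and end at sea level, only the ratio of lengths matters:

```python
# Illustrative geometry of delta switching. The 112- and 225-mile figures are the
# straight-line distances given in the text, used here as stand-ins for channel
# length; the 40-foot drop to sea level is a made-up placeholder. Only the ratio
# of lengths matters for the comparison.

FEET_PER_MILE = 5280
drop_ft = 40.0            # assumed elevation drop to the Gulf (placeholder)
atchafalaya_mi = 112.0    # apex of the Delta to Atchafalaya Bay
mississippi_mi = 225.0    # apex of the Delta to the mouth of the Mississippi

slope_atchafalaya = drop_ft / (atchafalaya_mi * FEET_PER_MILE)
slope_mississippi = drop_ft / (mississippi_mi * FEET_PER_MILE)

print(f"Atchafalaya route slope : {slope_atchafalaya:.2e} ft/ft")
print(f"Mississippi route slope : {slope_mississippi:.2e} ft/ft")
print(f"The shorter route is about {slope_atchafalaya / slope_mississippi:.1f} times as steep.")
```

Roughly twice the slope means swifter, more erosive flow down the shorter route, which helps explain why the river “wants” to take it.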

Figure 1. Mississippi Delta switching. The successive river channels and delta lobes of the past 5,000 years are numbered from oldest (1) to youngest (7). (Meade, 1995, U.S. Geological Survey Circular 1133, fig. 4C; see also Törnqvist et al., 1996, for an updated chronology.)

Geologist Harold Fisk predicted in 1951 that sometime in the 1970s, the Mississippi River would switch much of its water and sediment from its present course past New Orleans to its major branch, the Atchafalaya River. In order to maintain New Orleans as a deepwater port, the U.S. Army Corps of Engineers in the late 1950s constructed the Old River Control Structure, a dam with gates that essentially meters about 30% of the Mississippi River water down the Atchafalaya and keeps the remainder flowing in the old channel downstream toward New Orleans. Trying to meter the flow carries its own risks. During the 1973 flood on the Mississippi, the torrent of water scoured the channel and damaged the foundation of the Old River Control Structure. If the structure had failed, then the flood of 1973 would have been the event that switched the Mississippi into its new outlet—the Atchafalaya River—to the Gulf. The Corps repaired the structure and built a new Auxiliary Structure, completed in 1985, to take some of the pressure off the Old River Control Structure. The Mississippi kept rolling along.

Still, the fact remains that the “new” Atchafalaya lobe is actively building, despite receiving only one-third of the Mississippi water and sediment, while the old lobe south of New Orleans is regressing, leaving less and less of a coastal buffer between the city and hurricane surges from the Gulf. This situation has major implications.

Nature’s protection

At one time, the major supplier of sediment to the Mississippi Delta was the Missouri River, the longest tributary of the Mississippi River. However, the big reservoirs constructed in the 1950s on the upper Missouri River now trap much of the sediment, with the result that the lower Mississippi now carries about 50% less sediment (Fig. 2). It is ironic that the reservoirs on the Missouri, whose purposes include flood storage to protect downstream areas, entrap the sediments needed to maintain the Delta above sea level and flood level. Much less fine sediment (silt and clay) flows downstream to build up the Delta during seasonal floods, and much of this sediment is confined between human-made levees all the way to the Gulf, where it spills into deep water. Coarser sediment (sand) trapped in upstream reservoirs or dropped into deep water likewise cannot carry out its usual ecological role of contributing to the maintenance of the islands and beaches along the Gulf, and beaches can gradually erode away because the supply of sand no longer equals the loss to along-shore currents and to deeper water.
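A minimal elevation-budget sketch captures the logic of this paragraph; the rates below are hypothetical placeholders chosen only to show how cutting off and leveeing in the sediment supply flips the balance, not measured values:

```python
# A minimal elevation-budget sketch: delta land keeps pace with the sea only while
# sediment accretion outpaces subsidence plus sea-level rise. All rates below are
# hypothetical placeholders, not measurements.

def net_change_mm_per_year(accretion, subsidence, sea_level_rise):
    """Net elevation change relative to sea level, mm/yr (positive = land gain)."""
    return accretion - (subsidence + sea_level_rise)

# Before the big upstream reservoirs: seasonal floods spread sediment across the delta.
print(net_change_mm_per_year(accretion=12, subsidence=8, sea_level_rise=2))  # +2: land builds

# After: reservoirs trap much of the load, and levees carry most of the rest past the
# marshes into deep Gulf water.
print(net_change_mm_per_year(accretion=3, subsidence=8, sea_level_rise=2))   # -7: land drowns
```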

If Hurricane Katrina, which in 2005 pounded New Orleans and the Delta with surge and heavy rainfall, had followed the same path over the Gulf 50 years ago, the damage would have been less, because more barrier islands and coastal marshes were available then to buffer the city. Early settlers on the barrier islands offshore of the Delta built their homes well back from the beach, and they allowed driftwood to accumulate where it would be covered by sand and beach grasses, forming protective dunes. The beach grasses were essential because they helped stabilize the shores against wind and waves and continued to grow up through additional layers of sand. In contrast to a cement wall, the grasses would recolonize and repair a breach in the dune. (A similar lesson can be taken from the tsunami-damaged areas of the Indian Ocean. Damage was less severe where mangrove forests buffered the shorelines than where the land had been cleared and developed to the shoreline.) Vegetation offers resistance to the flow of water, so the more vegetation a surge encounters before it reaches a city, the greater the damping effect on surge height. The greatest resistance is offered by tall trees intergrown with shrubs; next are shorter trees intergrown with shrubs; then shrubs; followed by supple seedlings or grasses; and finally, mud, sand, gravel, or rock with no vegetation.

One of the major factors determining vegetation type and stature is land elevation. In general, marsh grasses occur at lower elevations because they tolerate frequent flooding. Trees occur at higher elevations (usually 8 to 10 feet above sea level) because they are less tolerant of flooding. Before European settlement, trees occurred on the natural levees created by the Mississippi River and its distributaries, and on beach ridges called “cheniers” (from the French word for oak) formed on the mudflats along the Gulf coast. The cheniers were usually about 10 feet high and 100 or so feet wide, but extended for miles, paralleling the coast. Two management implications can be derived from this relationship between elevation and vegetation: Existing vegetation provides valuable wind, wave, and surge protection and should be maintained; and the lines of woody vegetation might be restored by allowing the Mississippi and its distributaries to build or rebuild natural levees during overbank flows and by using dredge spoil to maintain the chenier ridges and then planting or allowing plant recolonization to occur.

Figure 2. The sediment loads carried by the Mississippi River to the Gulf of Mexico have decreased by half since 1700, so less sediment is available to build up the Delta and counteract subsidence and sea level rise. The greatest decrease occurred after 1950, when newly constructed large reservoirs trapped most of the sediment entering them. Part of the water and sediment from the Mississippi River below Vicksburg is now diverted through the Corps of Engineers’ Old River Outflow Channel and the Atchafalaya River. Without the controlling works, the Mississippi would have shifted most of its water and sediment from its present course to the Atchafalaya, as part of the natural delta switching process. The widths of the rivers in the diagram are proportional to the estimated (1700) or measured (1980–1990) suspended sediment loads (in millions of metric tons per year). (Meade, 1995, U.S. Geological Survey Circular 1133, fig. 6A.)

Of course, the vegetation has its limits: Hurricanes uproot trees and the surge of salt or brackish water can kill salt-intolerant vegetation. Barrier islands, dunes, and shorelines can all be leveled or completely washed away by waves and currents, leaving no place for vegetation to grow. The canals cut into the Delta for navigation and to float oil-drilling platforms out to the Gulf disrupted the native vegetation by enabling salt or brackish water to penetrate deep into freshwater marshes. The initial cuts have widened as vegetation dies back and shorelines erode without the plant roots to hold the soil and plant leaves to dampen wind- or boat-generated waves. The ecological and geological sciences can help determine to what extent the natural system can be put back together, perhaps by selective filling of some of the canals and by controlled flooding and sediment deposition on portions of the Delta through gates inserted in the levees.

The Mississippi River typically floods once a year, when snowmelt and runoff from spring rains are delivered to the mainstem river by the major tributaries. Before extensive human alterations of the watersheds and the rivers, these moderate seasonal floods had many beneficial effects, including providing access to floodplain resources for fishes that spawned and reared their young on the floodplains and supporting migratory waterfowl that fed in flooded forests and marshes. The deposition of nutrient-rich sediments on the floodplain encouraged the growth of valuable bottomland hardwood trees, and the floodwaters dispersed their seeds.

Human developments in the tributary watersheds and regulation of the rivers have altered the natural flood patterns. In the Upper Mississippi Basin, which includes much of the nation’s corn belt, 80 to 90% of the wetlands were drained for agriculture; undersoil drain tubes were installed and streams were channelized, to move water off the fields as quickly as possible so that farmers could plant as early as possible. Impervious surfaces in cities and suburbs likewise speed water into storm drains that empty into channelized streams. The end result is unnaturally rapid delivery of water into the Upper Mississippi and more frequent small and moderate floods than in the past. In the arid western lands drained by the Missouri River, the problem is shortage of water; it is this phenomenon that led to the construction of the huge reservoirs to store floodwaters and use them for irrigating crops in the Dakotas, while also lowering flood crest levels in the downstream states of Nebraska, Iowa, and Missouri.

In all of the tributaries of the Mississippi, the floodplains have been leveed to various degrees, so there is less capacity to store or convey floods (as well as less fish and wildlife habitat), and the same volume of water in the rivers now causes higher floods than in the past. On tributaries with flood storage reservoirs, the heights of the moderate floods that occur can be controlled. On other tributaries, flood heights could be reduced by restoring some of the wetlands in the watersheds; constructing “green roofs” that incorporate vegetation to trap rainfall; adopting permeable paving; building stormwater detention basins in urban and suburban areas; and reconnecting some floodplains with their rivers. Between Clinton, Iowa, and the mouth of the Ohio River, 50 to 80% of the floodplain has been leveed and drained, primarily for dry-land agriculture. On the lower Mississippi River, from the Ohio River downstream and including the Delta, more than 90% has been leveed and drained. Ironically, levees in some critical areas back floodwater up against other levees. In such areas, building levees higher is fruitless—it simply sets off a “levee race” that leaves no one better off. In the Delta, the additional weight of higher, thicker levees themselves can cause further compaction and subsidence of the underlying sediments.

The occasional great floods on the Mississippi are on a different scale than the more regular moderate floods. It takes exceptional amounts of rain and snowmelt occurring simultaneously in several or all of the major tributary basins of the Mississippi (the Missouri, upper Mississippi, Ohio, Arkansas, and Red Rivers) to produce an extreme flood, such as the one that occurred in 1927. That flood broke levees from Illinois south to the Gulf of Mexico, flooding an area equal in size to Massachusetts, Connecticut, New Hampshire, and Vermont combined, and forcing nearly a million people from their homes. With so much rain and snowmelt, wetlands, urban detention ponds, and even the flood control reservoirs are likely to fill up before the rains stop.

Protection shortfalls

In order to protect New Orleans from such great floods, the Corps of Engineers plans to divert some floodwater upstream of the city. Floodwater would be diverted through both the Old River Control Structure and the Morganza floodway to the Atchafalaya River; and through the Bonnet Carré Spillway (30 miles upstream of New Orleans) into Lake Pontchartrain, which opens to the Gulf. All of these structures and operating plans are designed to safely convey a flood 11% greater in volume than the 1927 flood around and past New Orleans.

But what is the risk that an even greater flood might occur? How does one assess the risk of flooding and determine whether it makes more sense to move away than to rebuild?

The Corps of Engineers estimates flood frequencies based on existing river-gauging networks and specifies levee designs (placement, height, thickness) accordingly. The resulting estimates and flood protection designs are therefore based on hydrologic records that cover only one to two centuries, at most. Yet public officials may ask for levees and flood walls that provide protection against “100-year” or even “1,000-year” floods—time spans that are well beyond most existing records. Because floods, unlike trains, do not arrive on a schedule, these terms are better understood as estimates of probabilities. A 100-year levee is designed to protect against a flood that would occur, if averaged over a sufficiently long period (say 1,000 years), once in 100 years. This means that in any given year, the risk that this levee will fail is estimated, not guaranteed, to be 1% (1 in 100). If 99 years have passed since the last 100-year flood, the risk of flooding in year 100 is still 1%. In contrast to other natural hazards, such as earthquakes, the probability of occurrence does not increase with time since the last event. (Earthquakes that release strain that builds gradually along fault lines do have an increased probability of occurrence as time passes and strain increases.)
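A short worked example makes the cumulative stakes concrete; it simply applies the 1% annual probability described above, treating each year as independent, as the design convention does:

```python
# What a 1% annual exceedance probability implies over longer horizons, assuming
# (as the design convention does) that each year is independent of the last.

annual_p = 0.01  # a "100-year" flood: a 1-in-100 chance in any given year

for horizon_years in (30, 50, 100):
    p_at_least_one = 1 - (1 - annual_p) ** horizon_years
    print(f"over {horizon_years:3d} years: {p_at_least_one:.0%} chance of at least one such flood")

# Approximate output:
# over  30 years: 26% chance of at least one such flood
# over  50 years: 39% chance of at least one such flood
# over 100 years: 63% chance of at least one such flood
```

Over the 30-year life of a typical mortgage, in other words, the “one-in-a-hundred” flood is roughly a one-in-four proposition.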

In essence, engineers assume that the climate in the future will be the same as in the recently observed past. This may be the only approach possible until scientists learn more about global and regional climate mechanisms and can make better predictions about precipitation, runoff, and river flows. However, the period of record can be greatly extended, thereby making estimates of the frequency of major floods much more accurate than by extrapolating from 200-year records of daily river levels. Sediment cores from the Mississippi River and the Gulf of Mexico record several episodes of “megafloods” along the Mississippi River during the past 10,000 years. These megafloods were equivalent to what today are regarded as 500-year or greater floods, but they recurred much more frequently during the flood-prone episodes recorded in the sediment cores. The two most recent episodes occurred at approximately 1000 BC and from about 1250 to 1450 AD. There is independent archeological evidence that floods during these episodes caused disruptions in the cultures of people living along the Mississippi, according to archeologist T. R. Kidder.

These flood episodes occurred much more recently than the recession of the last ice sheet, and therefore they were not caused by melting of the ice or by catastrophic failures of the glacial moraines that acted as natural dams for meltwater. They were most likely caused by periods of heavy rainfall over all or portions of the Mississippi Basin. Until more is known about climate mechanisms, it is prudent to assume that such megafloods will happen again. Thus, this possibility must be taken into account in designing flood protection for New Orleans, especially if public officials are serious about their expressed desire to protect the city against 1,000-year floods.

Building a “new” New Orleans

If New Orleans is to be protected against both hurricane-generated storm surges from the sea and flooding from the Mississippi River, are there alternative cost-effective approaches other than just building levees higher, diverting floods around New Orleans, and continuing the struggle to keep the Mississippi River from taking its preferred course to the sea? Yes, as people in other parts of the world have demonstrated.

The Romans used the natural and free supply of sediment from rivers to build up tidal lands in England (and probably also in the southern Netherlands) that they wished to use for agriculture. People living along the lower Humber River in England developed this to a high art in the 18th century, in a practice called “warping.” They had the same problems with subsidence as Louisiana, but they encouraged the sediment-laden Humber River to flood their levee districts (called “polders”) when the river was observed to be most turbid and therefore carrying its maximum sediment load. The river at this maximum stage was referred to as a “fat river” and inches of soil could be added in just one flood event. People of this time also recognized the benefits of marsh cordgrass, Spartina, in slowing the water flow, thereby encouraging sedimentation, and subsequently in anchoring the new deposits against resuspension by wind-generated waves or currents.

Could the same approach be taken in the Delta, in the new Atchafalaya lobe? Advocates for rebuilding New Orleans in its current location point to the 1,000-year-plus levees and storm surge gates that the Dutch have built. But the Netherlands is one of the most densely populated countries in Europe, with 1,000 people per square mile, so the enormous cost of building such levees is commensurate with the value of the dense infrastructure and large population they protect. The same is not true in Louisiana, where there are approximately 100 people per square mile, concentrated in relatively small parcels of the Delta. This low population density provides the luxury of using Delta lands as a buffer for the relatively small areas that must be protected.

However, the Dutch should be imitated in several regards. First, planners addressing the future of New Orleans should take a lesson from the long-term deliberate planning and project construction undertaken by the Dutch after their disastrous flood of 1953. These efforts have provided new lands and increased flood protection along their coasts and restored floodplains along the major rivers. Some of these projects are just now being realized, so the planning horizon was at least 50 years.

Figure 3. The old parts of New Orleans, including the French Quarter, were built on the natural levees created by the Mississippi River (red areas along the river in the figure), well above sea level. In contrast, much of the newer city lies below sea level (dark areas). Flooding of the city occurred when the storm surge from Hurricane Katrina entered Lakes Pontchartrain and Borgne and backed up the Gulf Intracoastal Waterway (GIWW), the Mississippi River Gulf Outlet (MRGO), and several industrial and drainage canals. The walls of the canals were either overtopped or failed in several places, allowing water to flood into the city. (Courtesy of Center for the Study of Public Aspects of Hurricanes, as modified by Hayes, 2005.)

Planners focusing on New Orleans also would be wise to emulate Dutch efforts to understand and work with nature. Specifically, they should seek and adopt ways to speed the natural growth and increase the elevation of the new Atchafalaya lobe and to redirect sediment onto the Delta south of New Orleans to provide protection from storm waves and surges. A key question for the Federal Emergency Management Agency (FEMA), the FEMA equivalents at the state level, planners and zoning officials, banks and insurance companies, and the Corps of Engineers is whether it is more sustainable to rebuild the entire city and a higher levee system in the original locations or to build a “new” New Orleans somewhere else, perhaps on the Atchafalaya lobe.

Under this natural option, “old” New Orleans would remain a national historic and cultural treasure, and continue to be a tourist destination and convention city. Its highest grounds would continue to be protected by a series of strengthened levees and other flood-control measures. City planners and the government agencies (including FEMA) that provide funding for rebuilding must ensure that not all of the high ground is simply usurped for developments with the highest revenue return, such as convention centers, hotels, and casinos. The high ground also should include housing for the service workers and their families, so they are not consigned again to the lowest-lying, flood-prone areas. The flood-prone areas below sea level should be converted to parks and planted with flood-tolerant vegetation. If necessary, these areas would be allowed to flood temporarily during storms.

Work already is under way that might aid such rebuilding efforts and help protect the city during hurricanes. The Corps of Engineers, in its West Bay sediment diversion project, plans to redirect the Mississippi River sediment, which currently is lost to the deep waters of the Gulf, to the south of the city and use it to create, nourish, and maintain approximately 9,800 acres of marsh that will buffer storm waves and surges.

At the same time, the Corps, in consultation with state officials, should guide and accelerate sediment deposition in the new Atchafalaya lobe, under a 50- to 100-year plan to provide a permanent foundation for a new commercial and port city. If old New Orleans did not need to be maintained as a deepwater port, then more of the water and sediment in the Mississippi could be allowed to flow down the Atchafalaya, further accelerating the land-building. The new city could be developed in stages, much as the Dutch have gradually increased their polders. The port would have access to the Mississippi River via an existing lock (constructed in 1963) that connects the Atchafalaya and the Mississippi, just downstream of the Old River Control Structure.

Under this plan, the Mississippi River would no longer be forced down a channel it “wants” to abandon. The shorter, steeper path to the sea via the Atchafalaya might require less dredging than the Mississippi route, because the current would tend to keep the channel scoured. Because the Mississippi route is now artificially long and much less steep, accumulating sediments must be constantly dredged, at substantial cost. Traditional river engineering techniques that maintain the capacity of the Atchafalaya to bypass floodwater that would otherwise inundate New Orleans also might be needed to maintain the depths required for navigation. These techniques include bank stabilization with revetments and wing dikes that keep the main flow in the center of the channel, where it will scour sediment.

The new city would have a life expectancy of about 1,000 years—at which time it would be an historic old city—before the Mississippi once again switched. The two-city option might prove less expensive than rebuilding the lowest parts of the old city, because the latter approach probably would require building flood gates in Lake Pontchartrain and new levees that are high enough and strong enough to withstand 500- or 1,000-year floods. In both scenarios, flood protection will need to be enhanced through a continual program of wetland restoration.

In evaluating these options, the Corps of Engineers should place greater emphasis on the 9,000 years of geological and archaeological data related to the recurrence of large floods along the Mississippi River. Shortly before the recent hurricanes hit the region, the Corps had completed a revised flood frequency analysis for the upper Mississippi, based solely on river gauge data from the past 100 to 200 years. Unless the Corps considers the prehistoric data, it probably will continue to underestimate the magnitude and frequency of large floods. If the Corps does take these data into account in determining how high levees need to be and what additional flood control works will be needed to prevent flooding in New Orleans and elsewhere, then the actual costs of the “traditional” approach are likely to be much higher than currently estimated. The higher costs will make the “working with nature” option even more attractive and economically feasible.

The Corps also should include in its assessments the gradual loss of storage capacity (due to sedimentation) in existing flood control reservoirs in the upstream Mississippi Basin, as well as the costs and benefits associated with proposed sediment bypass projects in these reservoirs. For example, the Corps undertook preliminary studies of a sediment-bypass project in the Lewis and Clark Reservoir on the upper Missouri River in South Dakota and Nebraska because the reservoir is predicted to completely fill with sediment by 2175, and most of its storage capacity will be lost well before then. By starting to bypass sediments within the next few years, the remaining water storage capacity could be prolonged, perhaps indefinitely. But studies showed that the costs exceeded the expected benefits. In these studies, however, the only benefits considered were the maintenance of water storage capacity and its beneficial uses, not the benefits of restoring the natural sediment supply to places as far downstream as the Delta. It is possible that the additional sediment would significantly accelerate foundation-building for “new” New Orleans and the rebuilding of protective wetlands for the old city. Over the long term, the diminishing capacities of such upstream storage reservoirs also will add to the attractiveness of more natural options, including bypassing sediments now being trapped in upstream reservoirs, utilizing the sediments downstream on floodplains and the Delta, and restoring flood conveyance capacity on floodplains that are now disconnected from their rivers by levees.

Action to capitalize on the natural option should begin immediately. The attention of the public and policymakers will be focused on New Orleans and the other Gulf cities for a few more months. The window of opportunity to plan a safer, more sustainable New Orleans, as well as better flood management policy for the Mississippi and its tributaries, is briefly open. Without action, a new New Orleans—a combination of an old city that retains many of its historic charms and a new city better suited to serve as a major international port—will go unrealized. And the people who would return to a New Orleans rebuilt as before, but with higher levees and certain other conventional flood control works, will remain unduly subject to the wrath of hurricanes and devastating floods. No one in the Big Easy should rest easy with this future.