Are All Market-Based Environmental Regulations Equal?

Economists have long advocated using the market to achieve environmental objectives. Possible policies include taxes on waste emissions and programs through which the government limits pollution by issuing emission permits that can be traded among companies. Unlike “command and control” regulations that specify which technology companies must use, market approaches such as these allow companies flexibility to choose their own ways to reduce pollution at lowest cost.

But which market-based instruments are best? Research suggests that the economic costs of tradable permits can be much larger than those of emissions taxes if the permits are given out free, or “grandfathered,” to firms. Emissions taxes provide revenues that can be recycled into tax reductions that increase employment. In addition, tradable permits can result in disproportionately high costs for the poor, whereas the revenues from emissions taxes can be converted into tax cuts that help the poor.

Policymakers are currently debating proposals to implement nationwide tradable permit programs for nitrogen oxide and mercury and to strengthen the existing allowance trading program for sulfur dioxide. The tradable permits approach pioneered by the United States is also receiving a great deal of attention throughout the world as a possible tool in managing greenhouse gas emissions. Before proceeding further down this path, policymakers should acquire a better understanding of the full implications of each of their policy options.

Measuring economic effects

To understand how environmental policies interact with the broader fiscal system, start with the so-called “double dividend” argument. Revenues generated from environmental taxes (or auctioned emissions permits) can be used to pay for cuts in labor taxes, such as income and payroll taxes. At first glance, these measures appear to increase employment while simultaneously reducing pollution, thereby producing a double dividend. The argument is particularly appealing for climate change policies, where the revenue potential is so large.

Suppose, for example, that the United States were to recommit to its Kyoto agreement pledge to reduce annual carbon emissions to 7 percent below 1990 levels by 2010. This might be achieved by a tax on the carbon content of fossil fuels of anywhere between $50 and $150 per ton of carbon, depending on whether the United States could buy credits for carbon reductions overseas. With a hypothetical carbon tax of $75 per ton, revenues would be around $90 billion per year–about one-sixth of total federal receipts from personal income taxes.
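
To see where the revenue figure comes from, here is a minimal back-of-the-envelope sketch in Python. The taxed emissions base of roughly 1.2 billion tons of carbon per year is an assumption chosen to be consistent with the article's round numbers, not an official estimate.

```python
# Back-of-the-envelope carbon tax revenue, using the article's round numbers.
# The taxed emissions base (~1.2 billion tons of carbon per year after the
# Kyoto reduction) is an assumption, not an official figure.

tax_per_ton = 75            # dollars per ton of carbon (hypothetical rate)
taxed_emissions = 1.2e9     # tons of carbon per year (assumed)

revenue = tax_per_ton * taxed_emissions
print(f"Annual revenue: ${revenue / 1e9:.0f} billion")   # ~$90 billion
```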

But the effect of a tax would not end there. Environmental taxes raise the cost of producing products, acting as a tax on economic activity. This means that (before tax revenue recycling) the levels of production and employment are lower than they would be in the absence of the emissions tax. The increase in government revenue would make it possible to reduce income taxes for workers, which would boost employment somewhat. Although this would soften the effect of the carbon tax, most studies find that the net result would be a decrease in jobs. This doesn’t make the carbon tax bad policy, but it does undermine the argument of those who claim that a carbon tax with revenues recycled into income tax reductions would increase employment.

The effects of tradable permits are similar to those of emissions taxes. They raise production costs and reduce economic activity. If a polluting company increases production, it must either buy permits to cover the extra emissions or forego sales of its own permits to other companies. Either way, it pays a financial penalty for producing emissions. Permits that are distributed to firms for free have adverse effects on employment in the same way that emissions taxes do, but without the potential offsetting benefits of recycling government revenue in other tax reductions. This has two important policy implications.

First, if permits were auctioned off by the government rather than given away for free, then society would be better off, as long as revenue from permit sales would be recycled into other tax reductions. Tax economists have estimated that for each dollar of revenue used to reduce income taxes, there will be a gain in economic efficiency of around 20 to 50 cents. Lower income taxes increase employment, and they also reduce distortions in the pattern of expenditure between ordinary spending and “tax-favored” spending such as owner-occupied housing and employer-provided medical insurance. In the carbon example above, the United States might be better off to the tune of $20 billion to $45 billion per year if it had a carbon tax or auctioned permits policy rather than grandfathered permits.
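
The $20 billion to $45 billion range follows directly from applying the 20-to-50-cent efficiency gain to roughly $90 billion in annual revenue; a minimal sketch (the text rounds the lower bound up to $20 billion):

```python
# Efficiency gain from recycling carbon revenue into income tax cuts:
# each recycled dollar yields an estimated extra 20 to 50 cents of
# economic efficiency.

revenue = 90e9                     # annual revenue from the example above
gain_per_dollar = (0.20, 0.50)     # range of efficiency gain per dollar

low, high = (revenue * g for g in gain_per_dollar)
print(f"Efficiency gain: ${low / 1e9:.0f} to {high / 1e9:.0f} billion per year")  # ~$18-45 billion
```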

Second, the economic cost of grandfathered permits can be substantially higher than previously thought. According to an estimate by Roberton Williams, Lawrence Goulder, and myself, the cost to the United States of meeting the initial Kyoto target through a system of grandfathered permits imposed on fossil fuel producers rises from roughly $25 billion per year (in current dollars) to around $55 billion when the permits’ effect in reducing employment and compounding labor tax distortions is included.

In fact, taking account of fiscal interactions might compromise the ability of grandfathered permits to generate overall net benefits for society. Suppose that global environmental damage from carbon emissions (for example, economic damage to world agriculture from climate change and the costs of protecting valuable land against sea level rises) were $70 per ton. (This is actually on the high side compared with most damage studies, although the estimates are subject to much dispute.) The initial Kyoto target for the United States would have reduced carbon emissions by around 630 million tons in 2010, implying annual environmental benefits of around $45 billion in current dollars. Using our cost figures and ignoring fiscal interactions, the grandfathered permit scheme would produce an estimated $20 billion in net benefits. But include fiscal interactions, and the policy fails the cost/benefit test, because costs exceed environmental benefits by $10 billion. Only the emissions tax/auctioned permit policies pass the cost/benefit test, producing net benefits of between $10 billion and $35 billion.
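
The cost/benefit comparison reduces to simple arithmetic. The sketch below uses the figures quoted above; the small discrepancies with the rounded numbers in the text arise because $70 per ton times 630 million tons is about $44 billion, which the text rounds to $45 billion.

```python
# Cost/benefit arithmetic for the initial U.S. Kyoto target, using the
# article's (rounded) figures.

damage_per_ton = 70      # assumed global damage, $ per ton of carbon
abatement = 630e6        # tons of carbon avoided per year

benefits = damage_per_ton * abatement        # ~$44 billion ("around $45 billion")

cost_grandfathered_no_fiscal = 25e9          # ignoring fiscal interactions
cost_grandfathered_with_fiscal = 55e9        # including fiscal interactions
recycling_gain = (20e9, 45e9)                # gain when revenue is recycled

print(f"Grandfathered, no fiscal interactions:   ${(benefits - cost_grandfathered_no_fiscal) / 1e9:+.0f}B")
print(f"Grandfathered, with fiscal interactions: ${(benefits - cost_grandfathered_with_fiscal) / 1e9:+.0f}B")
for gain in recycling_gain:
    net = benefits - (cost_grandfathered_with_fiscal - gain)
    print(f"Tax or auctioned permits (gain ${gain / 1e9:.0f}B):  ${net / 1e9:+.0f}B")
```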

Fiscal interactions do not always have such striking implications for the economic performance of environmental policies. Consider the sulfur dioxide program of grandfathered permits, which has reduced power plant emissions by about 50 percent, or 10 million tons. Annual benefits from the program, mainly from reduced mortality, have been measured at more than $10 billion per year. Estimates of the annual cost of the program, including fiscal interactions, are only $1.7 billion under the existing grandfathered scheme or $1.2 billion if the permits are auctioned rather than grandfathered. Regardless of whether permits are auctioned or not, estimated benefits swamp the costs of the sulfur dioxide program.

Who benefits?

Of course, there is no guarantee that the revenue from environmental taxes or permit auctions will be used wisely, but evidence from Europe indicates that it can be. Denmark recently introduced a package of taxes on sulfur dioxide, carbon dioxide, fossil fuels, and electricity. Revenues, which amount to about 3 percent of gross domestic product, have mainly been used to lower personal and payroll taxes.

One potential obstacle to green tax shifts, however, is the conflicting objectives of different government agencies. An environmental agency would typically be concerned about setting rates to meet a particular environmental goal without regard to revenue. The treasury department might be more concerned about revenue, regardless of whether the taxes lead to under- or overshooting environmental objectives.

But suppose the tax cuts don’t happen? Suppose that in the United States the revenue from environmental taxes or auctioned permits were used instead to reduce the federal budget deficit? A smaller deficit means less debt interest and principal to repay later, so taxes could be lower in the future. There would still be an economic gain from lower taxes, although one deferred to the future.

What if the revenue were used to finance additional public spending? The bulk of federal spending consists of transfer payments, such as Social Security, or expenditures that effectively substitute for private spending, such as medical care and education. Loosely speaking, the private benefit to people from a billion dollars of this type of spending is a billion dollars. But the benefits to society might be greater if the spending is achieving some distributional objective such as a safety net for the poor. If instead the revenue financed cuts in distortionary taxes, households would receive the billion dollars back, and on top of this there is a gain in economic efficiency as the distortionary effect of taxes on employment and so on are reduced. Thus, the social benefits from extra transfer spending could be larger or smaller than the benefits from cutting taxes, depending on the particular spending program.

Governments also provide public goods (such as defense, crime prevention, and transport infrastructure) that private companies usually do not. People may value a billion dollars of this spending at more than a billion dollars. If they do, the benefits from this form of revenue recycling may also be as large as (or larger than) benefits from reducing taxes.

Policymakers might also be concerned about the effects of environmental policies on different income groups. Unfortunately, environmental and distributional objectives often appear to be in conflict. A number of studies suggest that the burden on households from environmental regulation imposed on power plants, refineries, and vehicle manufacturers is moderately regressive. The increase in prices as producers pass on the costs of regulations tends to hurt lower-income groups disproportionately, because they spend a larger share of their income on polluting products than better-off households do.

But distributional concerns should not be an excuse for avoiding action on serious environmental problems. Pollution control measures should be evaluated mainly by weighing their environmental benefits against their economic costs for society as a whole. Distributional objectives are much better addressed by altering the income tax system or providing a safety net through the benefit system.

It still makes sense, however, to avoid environmental policies that increase income inequality. That is a major drawback of grandfathered emissions permits. When the government gives away rights to pollute for free, companies acquire an asset with market value. This enhances their net worth. The increase in company equity values in turn leads to more profits for shareholders, either directly through higher dividends and capital gains or indirectly through their holdings in retirement accounts. Stock ownership is highly skewed toward the rich; the top income quintile owns about 60 percent of stocks, whereas the bottom income quintile owns less than 2 percent. Using annually allocated grandfathered permits to meet the original U.S. carbon pledge under Kyoto could transfer more than $50 billion each year to the top income quintile, in the form of higher pretax income or larger retirement assets. Thus, higher-income groups can benefit greatly from grandfathered permits, with their windfall gains easily outweighing their income losses from higher product prices. Poor households, by contrast, are worse off. According to a study by the Congressional Budget Office, grandfathered permits to reduce U.S. carbon emissions by 15 percent would cut the annual real spending power of the lowest-income quintile by around $500 per household, while increasing that of the top income quintile by around $1,500 per household.
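
A rough sketch of how the permit windfall would be apportioned by stock ownership. The total annual permit rent (taken here as roughly equal to the $90 billion tax-revenue figure used earlier) and the middle-quintiles’ ownership share are assumptions; only the top (about 60 percent) and bottom (under 2 percent) shares come from the text.

```python
# Distribution of grandfathered-permit windfalls in proportion to stock
# ownership. The total rent and the middle-quintiles share are assumptions;
# the top and bottom shares are from the text.

annual_permit_rent = 90e9
stock_ownership_share = {
    "top quintile": 0.60,
    "middle three quintiles": 0.38,   # assumed so that shares sum to 1
    "bottom quintile": 0.02,
}

for group, share in stock_ownership_share.items():
    print(f"{group:23s} ${annual_permit_rent * share / 1e9:,.0f} billion per year")
# The top quintile receives ~$54 billion, consistent with "more than $50 billion."
```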

Auctioned emissions permits and emissions taxes do not create windfall gains to shareholders. Instead, the government obtains revenue that can be returned to households in a distributionally neutral manner, such as proportional reductions in all marginal income tax rates, or in ways that disproportionately benefit the poor, such as increasing personal allowances.

Making choices

One reason that emissions permits (whether auctioned or not) might be preferable to an emissions tax is that the emissions reduction a given tax rate will induce may not be known in advance. Suppose, for example, that an environmental agency wishes to prevent a lake from being polluted beyond a certain threshold that will harm aquatic life or make the lake unsuitable for recreation. If companies must obtain permits for each unit of emissions they put into the lake, the agency can limit pollution below the threshold with certainty simply by limiting the number of permits. Under an emissions tax, where the amount of pollution abatement that will be achieved is initially uncertain, the agency will be in the uncomfortable position of having to adjust and readjust the emissions tax to ensure that the pollution target is attained.

On the other hand, an emissions tax puts a ceiling on program costs. If abatement is very costly, companies will avoid it and instead pay a larger tax bill. Under a permit scheme, companies are forced to reduce pollution by the limit on permits, no matter how costly the required abatement turns out to be. These considerations have led some economists to recommend a hybrid policy, or “safety valve.” Under this scheme, a limited number of permits would be issued with the aim of hitting a desired environmental goal, but the government would sell additional permits if the permit price reached an unacceptably high level. That way, the stringency of the environmental goal is relaxed somewhat if abatement costs turn out to be particularly large.
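
A minimal sketch of the safety-valve mechanics, assuming a simple linear marginal-abatement-cost curve; the functional form and all of the numbers are illustrative, not from the article.

```python
# Illustrative "safety valve": a permit cap whose price cannot exceed a
# ceiling, because the government sells extra permits at that price.
# The linear marginal-abatement-cost curve and all numbers are assumptions.

def clearing_price(cap, baseline_emissions, mac_slope):
    """Permit price when marginal abatement cost rises linearly with abatement."""
    abatement = max(baseline_emissions - cap, 0.0)
    return mac_slope * abatement

def safety_valve(cap, baseline_emissions, mac_slope, ceiling_price):
    """Return (permit price, resulting emissions) under the hybrid policy."""
    price = clearing_price(cap, baseline_emissions, mac_slope)
    if price <= ceiling_price:
        return price, cap                  # ordinary cap-and-trade outcome
    # The price would exceed the ceiling, so firms buy extra permits at the
    # ceiling instead of abating further, and emissions end up above the cap.
    abatement_at_ceiling = ceiling_price / mac_slope
    return ceiling_price, baseline_emissions - abatement_at_ceiling

# Example: baseline 100 units, cap 70, marginal cost slope 2, ceiling $50.
print(safety_valve(cap=70, baseline_emissions=100, mac_slope=2.0, ceiling_price=50))
# -> (50, 75.0): the price is capped at $50 and emissions settle at 75, not 70.
```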

Although environmental policies that raise government revenue are appealing (so long as the revenues are not used for pork-barrel spending projects), the political reality is that interest groups in the United States do not eagerly hand money over to the government. It is no accident that grandfathered permits have been more common than taxes or auctions. We can expect a battle over how much, if anything, should be extracted from the regulated industries. Political compromise will often lead to lower tax rates or a tax that applies to only a limited number of activities. That’s politics.

But what if the choice is between grandfathered emissions permits or nothing? In the case of the sulfur dioxide trading program, grandfathered permits appear to be a good idea in spite of the drawbacks emphasized here. But even in the context of climate policies, I believe that a system of grandfathered carbon permits, appropriately scaled, would be preferable to doing nothing. For one thing, tradable permits provide incentives for the development of cleaner production methods. This offers hope that the next generation of Americans will be substantially less dependent on fossil fuels than the present one. Moreover, as people become more receptive to the idea of tradable permits, it is conceivable that the government may hold back an increasing portion of the annual carbon allowances for auctioning. This is a gradual approach that could at least put us on the path away from the idea that pollution permits should be bestowed freely, with the nation getting nothing in return.

Public Views of Science Issues

Who should control the human genes used in research? (a) George Bush, (b) Leon Kass, (c) corporate America, (d) Kofi Annan, (e) you? If you said (d) or (e) you are in tune with the views of the general public, according to a recent poll. If you guessed wrong, you may be in for more surprises. Science policy experts assume that they have a pretty good understanding of where the public stands on current science-related debates. But sometimes it makes sense to ask.

The Center for Science, Policy and Outcomes (CSPO), a Columbia University think tank based in Washington, D.C., commissioned a poll of a representative sample of 1,000 geographically dispersed U.S. residents over the age of 18. The poll focused on core questions such as who should control science and who benefits from science, but also included more specific questions about scientific research issues, especially in areas of medical research and biotechnology. The results of the poll are surprisingly diverse. On most issues there is no “public mind” but rather “demographic group minds.” The respondents do have opinions about science and technology (S&T) issues, but those opinions are neither simple nor predictable.

Who benefits most and least from the ways science and technology change the world?

The most striking result from this question is the broad consensus on who benefits least from S&T change. According to 70 percent of people with incomes less than $20,000, as well as 70 percent of those who earn more than $100,000, the poor benefit least. The idea that everyone gains from the S&T cornucopia seems to be an unsustainable myth. It is also a point that deserves more than passing notice from policymakers. When making research funding decisions, they might want to ask not only “for what?” but also “for whom?”

Figure 1

Figure 2

Would you be interested in slowing down or halting your own aging process, if the technology were to become available?

The first goal listed in Healthy People 2010, the National Institutes of Health’s strategic guidepost, is “to help individuals of all ages increase life expectancy and improve their quality of life.” Who could argue against slowing down the aging process? Apparently, most of us. Only 40.3 percent of those polled indicated that they would be interested. Even among those over 65, fewer than half were interested.

Figure 3

Is quality of life likely to be improved or harmed by research into cloning to provide cells that could be used to treat various diseases?

With this question, the variance in views among demographic groups is revealing. Among persons with incomes over $100,000 per year, 76 percent expect quality of life improvements, whereas only 45 percent among people with incomes under $20,000 share this expectation. Similarly, whereas 65 percent of those aged 18-34 expect improvement, only 43 percent of persons over 65 expect to have quality of life improved by cloning research. There is also a cloning gender gap: 66 percent of men, but only 49 percent of women, think cloning research will lead to improvements.

Figure 4

Figure 5

Who should have control over genes used in research?

Although distrust of international organizations is relatively common among Americans, in this case they seem to be more comfortable with international control than with their own government or corporations.

Figure 6

How much input should the public and the government have over technological change?

We apparently should spend more time polling the public about their views on S&T, because they clearly believe that they should be consulted. In fact, they have yet to be convinced that the government should have a prominent role.

Figure 7

The Developing World’s Motorization Challenge

Motorization is transforming cities and even rural areas of the developing world. The economic and social benefits are enormous: flexible individual transportation in urban areas, and reduced manual labor and improved market access in rural areas. In the longer term, however, motorization may stifle local development, increase pollution, and create unprecedented safety hazards. Without careful attention to the motorization process, disaster looms for cities of the developing world–disaster from which the industrialized countries cannot be insulated.

In rural areas and small cities of China and India, millions of small indigenous three- and four-wheel “agricultural” trucks are proliferating. In China, these vehicles are banned in large cities because of their slow speed and high emissions, but agricultural vehicle sales in China still outnumber those of conventional cars and trucks by more than five to one. Costing anywhere from $400 to $3,000 each, these vehicles are the heart of millions of small businesses that transport farm products to local markets and that move construction materials and locally manufactured products; they also serve as the principal mode of motorized travel in rural areas. They are analogous to the Model T in the United States. Agricultural vehicles are essential to local economic development and to the creation of entrepreneurial business activity in rural areas.

Motorization in cities is also soaring and highly valued. Personal vehicles, from scooters to large company cars, provide a high level of access to goods, services, and activities, as well as unmatched freedom. They provide access to an expanded array of job and educational opportunities. For many people, vehicles are also desirable as a status symbol and a secure and private means of travel. For businesses, they are an efficient means of increasing productivity.

But personal mobility and motorization also impose enormous costs, especially in cities. The well-known litany of costs includes air and noise pollution, neighborhood fragmentation from new and expanded expressways, and high energy use. There are also costs with global implications. Motorization is the largest consumer of the world’s petroleum supplies, making it central to international concerns over energy security and political stability in volatile regions. And it is a rapidly growing source of greenhouse gas (GHG) emissions, contributing to climate change. Worldwide, GHG emissions are rising faster in transportation than in any other sector, and fastest of all in developing countries.

Developing cities and countries are in a difficult situation. They must accommodate the intense desire for personal mobility while mitigating the heavy economic, environmental, and social costs of motorization. For countries such as India and China, which look to automotive manufacturing as a pillar of economic development, the challenges are even more intense.

The good news is that many opportunities exist to mitigate the adverse effects of motorization while still allowing personal transport to spread. Moreover, many strategies to manage motorization in developing countries respond to a variety of concerns that are locally compelling, including high roadway costs, stifling traffic congestion, and worsening air pollution. Developing countries confront choices regarding the timing, extent, and form of motorization. Those choices will have a great long-term impact on the quality, pace, and sustainability of their development. Fortunately, too, the strategies needed to respond to local concerns are largely consistent with those needed to respond to the global concerns of petroleum use and climate change.

Car talk

Motorization is soaring virtually everywhere. The number of motor vehicles in the world is expected to reach about 1.3 billion by 2020, more than doubling today’s number. The fastest growth is in Latin America and Asia.

These figures and forecasts, like almost all published data on vehicle ownership, do not include motorized two-wheelers. China alone has more than 50 million scooters and motorcycles. The costs of these vehicles are low and dropping. New mopeds (with engines under 50 cubic centimeters) and small motorcycles can be purchased for as little as $200. They are found throughout much of Asia and are starting to spread to Latin America. The proliferation of these low-cost scooters and motorcycles is accelerating the motorization process in the developing world. They encourage an early leap from buses and bicycles to motorized personal travel. No longer do individuals need to gather considerable savings to buy a vehicle. In Delhi, where the average income is less than $1,000 a year, close to 80 percent of households own a motor vehicle, most of them two-wheelers.

The benefit of these motorized two-wheelers is expanded access to personal mobility; the downside is more pollution, more energy use, and further undermining of public transport services. Public transport is heavily subsidized in almost all cities because of its large positive externalities (reduced need for roadways and reduced congestion) but also to ensure access by poor people. Nevertheless, many poor people still cannot afford transit services. Thus cities face pressure to keep fares very low. But in doing so, they sacrifice bus quality and comfort. Middle-class riders react by buying cars as soon as they can. With low-cost scooters and motorcycles, the flight of the middle class is hastened, transit revenues diminish, and operators reduce quality further as they serve a poorer clientele. Although the quality of service suffers first, a decrease in quantity of service often follows. This hastened departure of riders is creating even greater pressure on cities to manage public transport systems better. In virtually all cities in the world, in industrial as well as developing countries, public transit is losing market share.

Motorization’s enormous stress on city development and finances is pivotal. A study by the National Research Council asserts, “with very few exceptions, rapid growth in demand for motorized transport has swamped transport [infrastructure] capacity in the cities of the developing world.” The World Business Council for Sustainable Development, in the first commissioned report of a multimillion-dollar study on sustainable mobility, warns: “The major challenge in the developing world is to avoid being choked–literally and figuratively–by the rapid growth in the number of privately owned motorized personal-transportation vehicles . . . [Personal mobility] is deteriorating in many areas where it had been improving in the past.” Many cities in developing countries, with a fraction of the car ownership of the United States, now experience far worse traffic congestion and pollution than exist in the United States.

The roadway construction and financing challenge is not just one of economics and financing. It is also a political and social issue. Only a small minority of the population in developing-world cities owns cars and benefits from massive road-building budgets; in contrast, the vast majority suffer from increasing traffic congestion, noise, and pollution. In cities with many motorized two-wheelers, the vehicle user population is larger but still a small share of total travelers. Destruction of neighborhoods to build new expressways is starting to spark social unrest, as it did in the United States in the early 1960s.

International development banks and local privatization are playing an increasing role in financing facilities and services. There is a reluctance to finance expensive rail infrastructure, but money for roads and bus systems is readily available. Many parts of the developing world, particularly in Latin America, are selling roads, ports, railroads, and other facilities, or sometimes just the operating rights, to private companies as a means of financing the operation and expansion of new and even existing facilities. Even China is relying on tolls to finance intercity roads. Although privatization is an attractive solution to the funding woes of developing country governments, it creates a new mix of winners and losers that merits close scrutiny.

Another adverse effect of motorization that is attracting the attention of local policymakers is air pollution. Motor vehicles play a central role, accounting for about half the pollution, even with very low rates of vehicle ownership. Cities such as Santiago, Mexico City, Beijing, Katmandu, and Delhi are now aggressively imposing new rules and laws to reduce air pollution. Most are eliminating lead from gasoline so as to facilitate the use of catalytic converters (and reduce the health hazards of lead) and are accelerating the adoption of vehicle emission standards already in place in industrial countries. The prognosis is reasonably positive, because in many cases air pollution can be reduced largely with technical fixes at relatively modest cost (thanks largely to the flow of technical innovations from the industrial world). Large international automotive and energy companies are key to this.

More troublesome, because the solutions are not obvious, is petroleum use. Motorization leads to sharp increases in oil use. In most of the developing world, cars use about six times as much energy as buses per passenger-kilometer, and about twice as much as a small modern motorcycle (with a four-stroke engine). These ratios can vary considerably, mostly depending on the level of ridership.
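
The reason these ratios depend on ridership is that energy per passenger-kilometer is simply vehicle energy use divided by occupancy. The energy intensities and occupancies in the sketch below are illustrative assumptions, not figures from the article.

```python
# Energy per passenger-kilometer = vehicle energy use / occupancy.
# All intensities (MJ per vehicle-km) and occupancies are illustrative
# assumptions chosen only to show how the ratios shift with ridership.

vehicles = {                   # (MJ per vehicle-km, passengers aboard)
    "car":          (2.5, 1.5),
    "motorcycle":   (0.9, 1.2),
    "bus, crowded": (12.0, 50),
    "bus, empty":   (12.0, 10),
}

for name, (mj_per_vkm, occupancy) in vehicles.items():
    print(f"{name:13s} {mj_per_vkm / occupancy:.2f} MJ per passenger-km")
```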

Soaring oil use is not a compelling problem to local policymakers but is of great concern to national governments and even more so to the global community. The global transportation sector is now responsible for almost one-fourth of worldwide carbon dioxide emissions. The International Energy Agency projects that oil use and GHG emissions from developing countries will grow three times faster than emissions from the United States, Europe, and Japan over the next 20 years. Others project an even greater differential.

Overall, about half of all the petroleum in the world is used for transportation. Thus, greater transportation energy use translates directly into greater vulnerability to supply disruption, greater pressure on Middle Eastern politics, and greater emissions of carbon dioxide, the principal GHG. Although the transport sectors of countries such as China and India are still small contributors, with relatively few vehicles per capita, their emissions are increasing at a sharp rate. In China, for instance, transport accounts for only 7 percent of GHG emissions. In cities such as Shanghai, however, four- to sevenfold increases in transport emissions are anticipated over the next 20 years.

The challenge for these cities is heightened by the fact that uniform prescriptions do not work. Motorization patterns vary widely across the globe, particularly among developing countries. In some Asian cities, for instance, conventional trucks, buses, and cars account for only 5 percent of vehicles, compared with 60 percent in others. In Delhi and Shanghai, roughly two-thirds of vehicles are motorized two- and three-wheelers, whereas in African and Latin American countries, almost none are. In South Africa, minibus jitney transportation accounts for fully a third of all passenger-kilometers of travel, but in other countries it plays a negligible role. Shanghai has 22 cars per thousand residents, whereas much poorer Delhi has nearly three times as many. Numerous factors influence motorization. Income is the most important, but other factors more readily influenced by public policy and investments are also important. Motorization can be managed.

Although a few cities have coped well, most have not. The challenge of dealing effectively with rapid population growth, rapid motorization, and large groups of low-income travelers would be difficult for cities with substantial financial resources and strong institutions. For developing cities with limited funds and planning expertise–and with inexperienced institutions–effective transportation planning, infrastructure development, and policy implementation are extremely difficult. In many cases, the problem is lack of political will, compounded by lack of money and effective institutions.

In Delhi, for instance, the Supreme Court of India responded to a lawsuit alleging a failure of local governments to protect people’s health. It intervened with a variety of controversial directives, including a requirement that all buses and taxis convert to natural gas. These directives were not the result of a careful assessment of options, and they focused on technical fixes rather than more fundamental shifts in behavior and land use. The immediate result was bus shortages and violent demonstrations. These policies reflected a mood of desperation about air pollution and an exasperation with existing metropolitan institutions. Buenos Aires had a similar problem and found it politically impossible to pass a law to form a metropolitan transportation planning organization. In that case, the city successfully procured a loan from the International Monetary Fund to build bottom-up cooperative relationships between transportation stakeholders through small projects.

The timeline for transportation system development in today’s developing countries is compressed compared with that of cities and nations that have already completed the process. The rapid speed of development creates pressure for substantial investments within a relatively short period. Finding the resources to finance the needed infrastructure investments and the expertise to manage the growth is a challenge in many parts of the developing world.

Leapfrogging is not the answer

Transportation systems are highly fragmented, with a diverse set of technologies and a diverse mix of public and private investors, managers, and users. Frustrated policymakers often turn toward technology fixes, because they generally require less coordination and less behavioral and institutional change.

Leapfrog technologies–advanced technologies that allow developing countries to go beyond what is now typically used in industrial nations–are the highest-order technical fix. Why not skip over the relatively dirty and inefficient internal combustion engine, the large fuel production and distribution infrastructure associated with petroleum, and the chaos of “unintelligent” roads and transit systems? In the telecommunications industry, cellular phones are replacing wires as the physical equipment needed for communication all over the world. In developing countries, this is making it easier than ever for people to connect to each other and to the rest of the world, leapfrogging past the need for telephone lines.

Some advanced transportation technologies are already being pursued in developing nations. Electric bicycles and scooters are being used in China and a number of other countries to reduce urban air pollution. Some cities are switching buses, taxis, and other vehicles to natural gas. Still others are about to experiment with fuel cells. Shanghai is building a maglev train from the airport to downtown, employing German technology that failed for 25 years to find a market in developed countries. Information technologies are being used to control roadway congestion and collect tolls in many developing-country cities. And some small innovations such as inexpensive emission-control devices are being developed using local materials.

In the end, though, the case for a leapfrog approach is far less compelling in transportation than it has been in telecommunications. Advanced transportation technology does not offer any solutions that will revolutionize the way people and goods get around. Some fuel, propulsion, and information technology (known as intelligent transportation system, or ITS) options are currently available, and their deployment could be accelerated, generating modest emissions or energy savings. But generally speaking, they tend to be more costly than conventional petroleum combustion technologies and, in the case of ITS technologies, require huge financial and institutional investments. Advanced transportation technologies are an attractive option in developing countries, but great care must be taken to adapt them to the setting, anticipate unexpected costs, and provide the expertise and institutional investment needed to implement them successfully.

Perhaps the most talked-about leapfrog technology is fuel cells. They are more energy efficient and less polluting than internal combustion engines, and potentially cost-competitive. But they illustrate well the leapfrog challenge. They are far from cost-competitive today. So any country seriously contemplating a leap to fuel cells would need to invest many billions of dollars in its domestic automotive industry, or await investments from foreign companies. Fuel cell vehicles are not expected to be mass-marketed before 2010 in affluent industrial countries and thus could not leapfrog to developing countries for at least 15 years.

The temptation to embrace leapfrog technologies is seen in the experience of the Global Environment Facility (GEF). Established as a multilateral trust fund by the United Nations and World Bank, the GEF for many years shied away from transport, uncertain how to proceed. That changed in the late 1990s with an allocation of $60 million to a fuel cell bus initiative, funding pilot projects in Mexico City, São Paulo, Cairo, New Delhi, Shanghai, and Beijing. Delivery of about 50 buses was scheduled to begin in 2002. Such projects have consumed most of the resources allocated to transportation. The GEF is now exploring other strategies more seriously.

Take the bus

Novel policies, investments, and technologies are not needed. There are plenty of examples of effective initiatives around the world, many of them pioneered in developing countries (see box). What is missing in most cities is commitment and public resources.

Bus rapid transit is viewed as perhaps the most important transportation initiative today, not only in Asia and Latin America but also in the United States. It involves a variety of measures that enhance bus performance. The primary characteristics of bus rapid transit systems include some combination of segregated bus lanes, techniques to hasten boarding and alighting, priority given to buses at intersections, and effective coordination at stations and terminals. The motivation is to emulate rail transit without the high cost. Indeed, a few bus rapid transit operations have been able to move almost as many passengers in one bus lane as on one rail line (about 35,000 passengers in each direction), and at a fraction of the cost. Rail lines in urban areas typically cost over $100 million per mile in developing countries, whereas bus rapid transit costs less than one-tenth as much.
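
For a rough sense of the cost gap, the sketch below compares capital cost per unit of peak capacity using the article's round numbers; the corridor length and the per-hour reading of the 35,000-passenger figure are assumptions.

```python
# Capital cost per unit of peak passenger capacity, rail vs. bus rapid transit.
# Uses the article's round cost figures; the corridor length and the hourly
# interpretation of the 35,000-passenger capacity are illustrative assumptions.

corridor_miles = 10
peak_capacity = 35_000                  # passengers per direction (per hour, assumed)

rail_cost = 100e6 * corridor_miles      # "over $100 million per mile"
brt_cost = rail_cost / 10               # "less than one-tenth as much"

for mode, cost in [("rail", rail_cost), ("bus rapid transit", brt_cost)]:
    print(f"{mode:17s} ${cost / 1e6:,.0f} million capital, "
          f"${cost / peak_capacity:,.0f} per unit of peak capacity")
```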

Bus rapid transit achieves high speed by operating on exclusive rights-of-way and by giving buses signal priority where they cross other traffic (using onboard transponders). In the more sophisticated systems, buses move in convoys through city centers. These systems achieve fast loading and unloading by elevating platforms to the same level as the bus floor and by collecting fares off-board, allowing simultaneous and rapid entry and exit through extra-wide bus doors.

For almost two decades, the only successful example of bus rapid transit was in Curitiba, Brazil, though many elements of that system were also found elsewhere. Europe had many exclusive busways and tram and bus signal prioritization, but other features were missing. By the 1990s, however, Quito, São Paulo, Nagoya, Ottawa, Pittsburgh, and a growing number of other cities around the world had built major bus rapid transit systems. By providing high capacity and high speed, these systems attract more riders and provide service more efficiently than conventional bus services operating in mixed traffic.

Steering away from trouble

As motorization overwhelms cities of the developing world, the challenge for public authorities is twofold: enhance the attractiveness of collective and nonmotorized modes and reduce the impact of personal vehicles. The United States can assist developing countries in forging and implementing sustainable transportation strategies in a variety of ways, emphasizing approaches that recognize and align with local needs and priorities. These efforts should engage many institutions and elements of U.S. society. Enhanced efforts are needed in the following areas:

Private investment and technology transfer. The vast majority of resource flows from industrial to developing countries comes through private investment. Efforts should be undertaken to encourage stronger investment in efficient and environmentally beneficial technologies, including production of clean transportation fuels and vehicle technologies. Apart from broader concerns about investment risk in developing countries, innovative transportation strategies face additional barriers, such as high initial capital costs. One potential mechanism to help overcome perceived investment risks would be a public-private investment fund established by the Overseas Private Investment Corporation, targeted specifically to transportation needs in developing countries. A transitory fund that uses government funding to leverage private capital could mitigate financing risk and serve as a bridge to longer-term financing through private or multilateral lenders. Also, small programs at the California Energy Commission and U.S. Department of Energy to assist private companies investing in energy-efficient technologies in developing countries could be expanded.

Multilateral and bilateral government support. Working through existing institutions, the United States should increase government lending and assistance for sustainable transportation strategies. For instance, it should work with multilateral lenders to increase financing for such projects and should support these efforts by making technical and planning expertise within federal agencies available. The government also should commit adequate and sustained funding for the GEF, which serves as the funding vehicle for various multilateral environmental agreements. Priority should be given to projects that enhance nonmotorized travel, transit services (such as bus rapid transit), and vehicle technology (such as facilitating pollution reduction by eliminating lead and reducing sulfur in fuels).

Capacity building. Perhaps the most important outreach from the United States could be to help strengthen the capacity of developing countries to analyze and implement transportation strategies and to integrate them with land use and broader sustainable development strategies. These efforts need not be undertaken exclusively or even primarily by government entities. The private Energy Foundation and the Packard Foundation, for instance, fund U.S. experts to work with government officials and nongovernmental organizations in China to develop energy standards and test protocols for various products, including motor vehicles.

Training of professionals and researchers by U.S. universities also plays an important role in capacity building and technology transfer. Historically, U.S. universities drained the top students from developing countries, but that is becoming less true. Many students are returning permanently or through various collaborative ventures. Increasingly, U.S. universities are forming alliances with those in developing countries and participating in various cross-training and technology transfer programs. More such collaboration could be highly beneficial, with funding from private foundations.

Other potential partners in capacity building could include large automakers or other major international companies. Many companies have the resources to assign and fund technical staff to assist in traffic management and in environmental, energy, and safety regulation. Because these companies have a significant stake in these newly emerging markets, safeguards against undue conflicts of interest would be necessary.

In the end, the United States, as the world’s largest economy, energy user, and GHG emitter, has a responsibility to show some leadership. Its ability to encourage sustainable development elsewhere will remain seriously compromised until it demonstrates a genuine commitment to addressing its own GHG emissions. Through the 1992 Framework Convention on Climate Change (to which the United States is a party) and the subsequent Kyoto Protocol, industrial countries have committed to the global promise that, having generated the bulk of GHG emissions to date, they must take the first steps toward emission reduction. The U.S. withdrawal from Kyoto and the Bush administration’s adoption of a climate strategy that allows substantial continued growth in U.S. emissions underscore the perception in developing countries that industrial countries have yet to deliver on that promise.

With or without the Kyoto Protocol, the United States can pursue a suite of well-known policy options for curbing transportation-related emissions in the United States, including improving vehicle efficiency through standards, taxes, and tax credits; promoting low-carbon and renewable fuels; creating innovative transit services suited to prevailing suburban land development patterns; using information technologies and other innovations to encourage intermodal connections with conventional bus and rail transit; and discouraging single-occupant driving.

Ultimately, the most cost-effective tool for reducing emissions is likely to be a trading system that caps emissions, either by sector or economy-wide, and allows companies to buy and sell GHG credits. The United States should create the domestic framework for such a system, making it as compatible as possible with other national trading systems and the international trading system established under Kyoto. An effective trading system could prove to be one of the most powerful means of facilitating private investment in sustainable transportation in developing countries.

A related opportunity is the Clean Development Mechanism (CDM) established under Kyoto, which allows developing countries that are hosting emission reduction projects to market the resulting emission credits. One promising approach would be to recognize sector-based efforts. For instance, a comprehensive program to reduce transportation-related emissions in a given city or country could be recognized for crediting purposes through CDM or a CDM-type mechanism linked to a domestic U.S. trading system. Such an approach would provide a strong incentive to both U.S. companies and developing countries to support more sustainable transportation choices.

The United States can do a great deal to support sustainable transportation in developing countries. Fortuitously, many strategies and policies aimed at solving problems there can at the same time address global concerns about climate change and petroleum dependence. It is unlikely, though, that such assistance alone could ever be sufficient to meet the need.

The United States can in the long run be far more influential by launching credible efforts at home–to reduce transportation oil use and emissions and to tackle climate change more broadly–and by creating strong incentives to engage the private sector in these efforts. As the world’s largest market for motor vehicles and other transportation services, the United States to a large degree drives the pace and direction of transportation technology development worldwide. Policies that reduce greenhouse gas emissions from the U.S. transportation sector will have a significant spillover effect in the developing world, both in generating cleaner technology and in shifting the orientation of multinational auto manufacturers.


Transport Success Stories in Developing Countries

Singapore

Singapore is a small, relatively affluent country with low car ownership and extensive, high-quality transit service. In the 1950s, Singapore had a high motorization rate for its income, a relationship its leaders explicitly set out to reverse. Singapore restrained vehicle ownership and use, invested aggressively in public transit, and controlled land use development.

Investment in bus and rail transit has been substantial. The rail transit network was carefully designed in coordination with land use development plans. Stations are located near 40 percent of all businesses and within walking distance of 30 percent of all Singaporeans. The government also strongly discouraged car ownership and use. A very high additional registration fee (ARF) was imposed on vehicle purchases until 1990, when it was replaced by an auction system. At its height, the ARF reached 150 percent of the vehicle’s market value; the bid price for the right to purchase a vehicle under the current system is similarly high. In parallel, vehicle usage has been restrained with high road taxes and parking fees. Until 1998, drivers entering certain areas of the city were required to purchase an expensive license, which was then replaced by electronic road pricing. Singapore emerged from poverty in the 1950s to be one of the most affluent countries in the world, with among the highest quality-of-life ratings and with very low transportation energy use and GHG emissions for a country with its income level.

Shanghai, China

Shanghai most closely resembles Singapore, though at an earlier stage of development and on a much larger scale (16 million versus 4 million people). Shanghai has a sophisticated planning organization that coordinates transportation decisions with other land use and city planning policies. The municipal government has considerable control over land use and can coordinate housing and transit investments in a way that is impossible in many other parts of the world. It has built grade-separated lanes for bicycles and slow-moving scooters along most major roads and separate sidewalks for pedestrian traffic, and is building an extensive rapid transit rail system to serve new satellite cities. Shanghai is executing an ambitious plan to decentralize the extremely crowded city, with coordinated investments in rail transit and major highways. From 1991 to 1996, Shanghai spent approximately $10 billion on transport infrastructure, including two major bridges, a tunnel, an inner ring road, and the first line of its new subway system. It has also adopted strong disincentives for car ownership, including high taxes on vehicles and registration caps.

Curitiba, Brazil

Curitiba is a superb example of policy coordination, in this case between land use planning and public transit investments. This is one of the few cities in the world that has implemented a linear pattern of development together with an efficient transportation system. Buses efficiently serve the entire city with a hierarchy of routes, including feeder routes and a limited number of dedicated routes for double-articulated buses (extra-long buses that bend). Development was strongly encouraged along the dedicated routes. At the same time, much of the city center was converted to pedestrian-only streets that are most easily accessed by public transit. From the mid-1970s to the mid-1990s, bus ridership increased more than 2 percent a year. During that time, every other Brazilian city and most cities elsewhere in the world experienced significant declines.

Chile

Unlike many developing countries, Chile has already made radical structural changes in its transportation system. It has one of the most sophisticated efforts to transfer transportation infrastructure and service provision to the private sector. In 1990, in response to long periods of deferred investment, the government launched an ambitious franchising program for roadways and freight railways. Today, all the main highways in Chile are built, financed, and operated by private companies. In the future, smaller roadways and even urban streets may be privatized as well. Freight railways or the right to use the tracks have been sold to private operators, resulting in greatly increased business on the affected lines. The overall effect has been far greater investment in transportation facilities than could have been provided by cash-strapped government agencies.

Bogotá, Colombia

In the 1990s, Bogotá implemented effective programs to simultaneously restrain vehicle ownership, improve conditions for walking and biking, and enhance bus transit. In the late 1990s, the government opened two lines of a planned 22-corridor bus rapid transit system (modeled after Curitiba’s), built 200 kilometers of a planned 300-kilometer network of bike lanes, expanded numerous sidewalks, added a 17-kilometer pedestrian zone, and implemented a number of demand management measures. Cars with license plates ending in one of four numbers were barred from operating within Bogotá during the morning and evening peaks; parking fees were doubled; gasoline taxes were increased by 20 percent; and bollards were installed on sidewalks to prevent people from parking illegally. All these measures were boosted by occasional car-free days, car-free Sundays, and other promotional efforts. In the first four years, the percentage of trips made by private cars and taxis dropped from 19.7 percent to 17.5 percent, and bike trips increased from 0.5 percent to 4 percent of all trips.

Fall 2002 Update

International ecosystem assessment now under way

In “Ecosystem Data to Guide Hard Choices” (Issues, Spring 2000), I discussed the rationale for a new international scientific assessment focused on the consequences of ecosystem change for human well-being and described the proposed Millennium Ecosystem Assessment (MA). The assessment is motivated by the great changes humans are making in global ecosystems, along with the growing demands for goods and services from these ecosystems. To meet these demands and to prevent and eventually reverse ecosystem degradation, we can no longer manage biological resources sector by sector but must instead consider the consequences of actions on the multiple goods and services provided by ecosystems. Decisionmakers increasingly require integrated and forward-looking information to help guide the complex choices that they face.

The MA was a proposed response to these information needs. Modeled on the Intergovernmental Panel on Climate Change (IPCC), the MA was designed to help meet decisionmakers’ needs for information on ecosystems and human well-being and to build capacity within countries to undertake similar assessments and act on their findings. The MA received the endorsement that it needed from governments, as well as significant financial support in 2000, and in June 2001 it was formally launched by United Nations Secretary General Kofi Annan. The MA has been authorized by three international conventions–on biological diversity, wetlands, and desertification–as one source of their assessment input. A distinguished 45-member board represents the various users of the MA. An assessment panel of 13 leading social and natural scientists has been established, along with four working groups, each involving 30 to 80 coordinating lead authors. More than 500 lead authors are now being invited to join these working groups, and an independent review board is being established. The first product of the assessment, Ecosystems and People: A Framework for Assessment and Decision-making, will be published early in 2003, with the main assessment and synthesis reports planned for release in late 2004.

Major financial support for the MA has been provided by the Global Environment Facility, United Nations Foundation, David and Lucile Packard Foundation, World Bank, United Nations Environment Programme, and the governments of Norway and Saudi Arabia. Significant in-kind contributions have been made by China, Japan, Germany, the Netherlands, and Sweden. In addition, the U.S. government has made nearly $5 million worth of Landsat-7 images available to the MA. These images will provide governments and researchers around the world with invaluable baseline information on land cover at the turn of the millennium.

Now, one year into the assessment, three aspects of the process are proving to be particularly interesting. First, the multiscale structure of the MA has attracted considerable interest in countries around the world and promises to be one of the most influential components of the process. The MA is not just a global assessment but a variety of assessments being conducted at every geographic scale, from local communities to subcontinents to the globe, with methodologies being developed to link these into a multiscale framework. Assessments at subglobal scales are needed because ecosystems are highly differentiated in space and time and because sound management requires careful local planning and action. Local assessments alone are insufficient, however, because some processes are global and because local goods, services, matter, and energy are often transferred across regions.

Considerable interest exists around the world in taking part in these subglobal assessments, even though the MA is able to provide only modest seed money for these activities. Subglobal assessments (local, national, regional, or multiscale) are now underway in Norway, western China, southern Africa, Southeast Asia, India, Papua New Guinea, Sweden, and in a network of tropical forest sites through the Alternatives to Slash and Burn project of the Consultative Group on International Agricultural Research. Additional subglobal assessments are being designed in Chile, Peru, Saudi Arabia, Egypt, Indonesia, the Philippines, Canada, and eastern Russia. The European Environment Agency will also be using elements of the MA methodology in its upcoming European environment report, so that it too can contribute to the overall MA process.

Second, the MA will be the first global assessment to incorporate traditional local knowledge and western scientific knowledge in its findings. The importance of local knowledge in informing management choices for ecosystems is clear, yet the standard protocols for scientific assessments make it difficult to incorporate this type of information into assessment products. In addition to the development of methods for linking assessments across scales, the MA is also attempting to develop methods for linking different epistemologies within each scale. These two issues will be the focus of an international conference planned for 2003 in Kunming, China: Bridging Scales and Epistemologies: Linking Local Knowledge and Global Science in Multiscale Assessments.

Finally, drawing on the findings of research conducted by the Global Environmental Assessment project at Harvard, the MA is striving to maintain a high level of engagement and interaction with its various audiences. Experience from past assessments has shown that the incorporation of scientific findings into decisionmaking processes is aided through a process of continuous dialogue between the assessors and the users of the findings. This ensures that the assessment is responsive to user needs and strengthens the legitimacy of the process.

The MA faces a daunting challenge in this regard, because it seeks to meet the needs not just of three different conventions but also of users in the private sector and civil society, who often have as much influence on ecosystems as government policymakers. In order to reach these diverse stakeholders, the MA is working with institutions in a number of countries to establish national-scale user forums. Working with the World Business Council for Sustainable Development and with industry associations and individual firms, the MA is now planning a series of workshops in 2003 to fully engage the private sector in the process. And to strengthen engagement with the scientific community, more than 15 of the world’s national academies have now become partners with the MA to help with the review process and with outreach for the final products.

The impact of the MA will depend in part on whether it improves decisions, stimulates action, and builds assessment capacity. But it also depends on whether a mechanism for regular integrated assessments can be institutionalized within the intergovernmental framework after the completion of the MA. Governments, the private sector, and civil society will have growing needs for information and guidance from science as we pursue the United Nations’ Millennium Development Goals of reducing poverty, improving health, and ensuring environmental sustainability. Sector-by-sector assessments such as the IPCC must continue, but it may now be time to build on the experience of the MA and establish an assessment mechanism that can bring the findings of sustainability science to bear on the critical, synthetic, and complex challenges that must be solved to achieve the Millennium Development Goals.

Walter V. Reid

Advanced Technology Program survives challenge

In “The Advanced Technology Program: It Works” (Issues, Fall 2001), I argued that the Advanced Technology Program (ATP) had proven its success. The research carried out under the National Research Council (NRC) review of government-industry partnerships had found the program to be well conceived and well managed. Reviews of the awards made suggested that this highly competitive program was successfully addressing an important aspect of the U.S. innovation system. Despite the uncertainty inherent in high-risk, high-payoff technologies, the program had established a record of achievement. Indeed, one of the effects of the political debates surrounding the program in the mid-1990s was the development of a widely acclaimed evaluation program.

At the time the article appeared in Issues, the program’s future seemed to be in doubt. The incoming administration had suspended new awards and recommended a $13 million budget, sharply down from the previous year’s funding of approximately $146 million. The Senate Appropriations Committee responded with a recommendation for a $204 million appropriation. (The final 2002 budget was about $185 million.) The committee found that extensive, rigorous assessments had revealed that the ATP does not fund projects that would have been financed in the private sector but focuses on “valley of death” projects that the private sector is unlikely or unable to fund on its own. The committee also endorsed the principle that the government should play a role in choosing promising technologies to fund.

The second major development concerned the decision by the Department of Commerce to review the program. In a February 2002 report entitled Reform with a Purpose, Commerce Secretary Donald Evans proposed six program reforms for Congress to consider. In so doing, the department endorsed continuing the program, albeit with less money than the Senate Commerce Committee recommended, thus putting to rest any fears that the program would be eliminated. A number of the reforms recommended by the administration, such as increased program emphasis on cooperation with universities, corresponded with recommendations made in the NRC assessment. One problematic recommendation called for recipients of ATP awards to “pay an annual royalty to the federal government of 5 percent of any gross revenues derived from a product or an invention . . . created as a result of ATP funding.” This “recoupment” proposal met with a chilly reception on Capitol Hill.

As one critic noted, the recoupment proposal suffers from the “one invention/one product myth,” which seriously understates the complexity of the innovation and commercialization processes. Implementing it would be an accountant’s nightmare, and it would likely drive away many of the businesses that now participate. Perhaps the key point is not just that recoupment would be hard to implement, but that it is unnecessary. We already have a recoupment program; it’s called the tax system.

The good news is that the program has at last been recognized for the quality of its operations and its rigorous assessment activities. In fact, officials from other governments have been coming to study the program, looking for lessons they can apply at home. And here at home, Congress has mandated that the nation’s other major award program, the $1.3 billion Small Business Innovation Research Program, should be subject to similar independent assessment, suggesting that ATP has not just funded innovators, it has become an innovator.

Charles Wessner

Fighting Traffic Congestion with Information Technology

Traffic congestion is a vexing problem felt by residents of most urban areas. Despite centuries of effort and billions of dollars worth of public spending to alleviate congestion, the problem appears to be getting worse. Between 1980 and 1999, vehicle-miles of travel on U.S. roadways grew by 76 percent, while lane miles increased by only 3 percent. Average daily vehicular volumes on urban interstates rose by 43 percent between 1985 and 1999, from 10.331 million to 14.757 million. In a study of 68 urban areas published in 2001, the Texas Transportation Institute reported that the percentage of daily travel taking place during congested periods increased from 32 percent in 1982 to 45 percent in 1999; typical motorists faced seven hours per day of congested roadways in 1999 compared with five hours in 1982. According to the Federal Highway Administration, road delays (defined as travel time in excess of that at free flow conditions) increased by 8.5 percent between 1993 and 1997. Congestion also pollutes the air and wastes precious fuel.

Despite the exasperation that traffic congestion causes, most people know surprisingly little about it or about what can be done about it, and much of what is said in the media is an oversimplification. We live in a society in which, for political and social reasons, we consistently label congestion a major problem to be solved yet find it unacceptable to adopt the most effective solutions. Indeed, the political debate over the issue suggests that we actually prefer the problem to the solutions. If we stay on our current path, in the coming years we will implement innovations to mitigate worsening traffic and expand the transportation system to accommodate some of the growth in travel, but we will likely shy away from measures that would actually cure the problem.

There is one factor, however, with the potential to change the course that we are on: information technology. A wide variety of information technology applications, just beginning to be implemented, could do far more in the struggle against traffic congestion than building new highways and transit routes or adding government regulation. In fact, we now have the technical means to finally “solve” the congestion problem.

Mixed blessing

Although we always label congestion a problem to be solved, it is surely not all bad. In the United States, worsening traffic congestion is most often associated with prosperity rather than poverty and with growth in population and business rather than decline. Congested city centers are usually the most exciting and high-rent of all urban environments, home to dynamic industries, tourist attractions, and cultural activities. Traffic congestion becomes less pronounced during recessions, and stagnant rust belt cities would willingly trade high unemployment rates and vacant industrial tracts for some troublesome traffic congestion. When and where it reaches very high levels, traffic congestion can become self-correcting; for example, when businesses choose to leave an area because it is too crowded and plagued by delays.

Politicians, not surprisingly, want to have their cake and eat it too. They want the growth and economic vitality that bring congestion, yet they also want to control or reduce that congestion. They worry that congestion will kill the goose that laid the golden egg by slowing growth and driving investment elsewhere, but refuse to implement effective strategies to relieve congestion because stringent solutions might, like congestion itself, redirect growth to other areas. Although technical experts could actually solve the problem of congestion, their solutions are politically unacceptable because they threaten economic growth along with congestion. In theory, automobiles could be banned from sectors of city centers; bridge tolls could be raised to such high levels that they would reduce traffic backups; and taxes on gasoline could be made so high that people would increasingly use mass transit and cycling. But such strategies could not be adopted in the United States and would stifle the economic growth and cultural activity that are considered the greatest successes of our society. Would we really vote for emptier streets if they meant fewer bargains at stores, closed movie houses, and higher rates of unemployment?

The notion that growing traffic has to be accommodated rather than stifled has been the motivation for innovations by private entrepreneurs and public officials over many centuries. The more successful of these have indeed reduced or eliminated congestion in some ways and for some time, but eventually cities have grown and readjusted to create a new equilibrium that includes new and perhaps different patterns of congestion. Then these are again identified as serious problems in need of repair, and new solutions are proposed. That process continues today, and although congestion has never actually been permanently alleviated by any of these innovations, they have surely improved the quality of urban life by supporting the expansion of diverse activity centers.

Policymakers usually base their recommendations on statements about congestion that consistently and dramatically oversimplify reality. In some cases, the beliefs that motivate policymaking may actually be dead wrong. Do we really know the extent to which citizens worry about traffic congestion or see it as a serious public policy problem? The evidence is confusing at best. Residents of the San Francisco Bay area recently rated urban traffic congestion as the single most important problem affecting their quality of life, even more important than public education or crime. This is consistent with research findings indicating that driving in heavy traffic is stressful, as measured by elevated blood pressure, eye pupil dilation, and the occurrence of incidents of road rage. On the other hand, there is also recent research showing that many people find driving to be a relaxing interlude between their many other stressful activities. Survey research recently has shown that a substantial proportion of drivers would actually prefer to spend more time traveling each day than they presently do. Presumably, a diversity of personality types and differences in our attitudes based on the time of day at which we travel and the purposes of our trips mean that it is difficult to generalize.

Press releases from transportation agencies and political leaders frequently speak of tens of millions of dollars in annual “costs” associated with congestion in metropolitan areas. Where do such numbers come from, and what do they mean? These estimates come quite simply from multiplying aggregate hours of delay by some dollar figure such as a “typical” hourly wage rate: A million hours of delay per year times $10 per hour yields a cost of congestion of $10 million, a dramatic figure quickly reported by the news media. But it is not at all clear that this number has any meaning. Some drivers, like those behind the wheels of commercial vehicles, are indeed paid wages for time they spend on the road, but most are not. And if we could produce a miracle that would enable us tomorrow to spend much less time in congested traffic than we did today, would we actually convert the saved time into labor that would produce added income? For most of us the answer would be no, so the wage rate may be a meaningless way to value congestion. If we used the saved time to mow the lawn or go for a jog, the time saving would certainly have value, but is that value appropriately expressed by a wage rate?
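
To see how such headline figures are produced, consider a minimal sketch of the delay-times-wage arithmetic described above; the delay total and the hourly value below are illustrative assumptions, not agency data.

```python
# Minimal sketch of the "cost of congestion" arithmetic described above.
# All numbers are illustrative assumptions, not agency data.

annual_delay_hours = 1_000_000   # aggregate hours of delay per year (assumed)
assumed_value_per_hour = 10.0    # a "typical" hourly wage used as a proxy ($)

headline_cost = annual_delay_hours * assumed_value_per_hour
print(f"Headline congestion 'cost': ${headline_cost:,.0f} per year")

# The article's caveat: unless saved time is actually converted into paid work,
# the wage rate may overstate (or simply mislabel) the value of that time.
```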

It is similarly not clear that if one citizen loses 10 minutes a day to congestion and another loses 100 minutes, the second person’s loss is worth 10 times that of the first. We may not be willing to pay anything to save 10 minutes per day yet would willingly pay to save 100 minutes, so the value of time may be quite nonlinear, complicating the situation greatly.

Although “smart growth” does reduce overall auto use, it does so by creating congestion rather than relieving it.

As we learn more about travel behavior, we have begun to understand that travelers are more interested in the predictability of the time that a trip takes than they are in the average length of trip time. In other words, people are not likely to complain as much if a trip takes them on average 45 minutes instead of 30 minutes, but they are likely to be quite concerned if it takes 15 minutes one day and 45 minutes the next. To avoid being late to work or to an important appointment, we must plan a trip to allow for the longest travel time that can reasonably be expected rather than for an average travel time. Aggregate hours of delay may very poorly measure what is most important to people about traffic congestion, and attaching dollar values may obfuscate rather than clarify the issue. Census data show us that the median journey from home to work in the United States is increasing by only a few minutes per decade, even though cities are spreading out considerably. People in the suburbs travel longer distances between home and work than do those in the inner city, but generally they make those trips at higher speeds, so travel times are growing very slowly. In the face of this evidence that typical travel time is hardly growing, it is probably our concern with the variance or reliability of travel time that explains our growing concern about traffic congestion. Interestingly, although variance is more important than median travel time, we collect data on the median and report nothing to the public about the variation.
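
One way to make the reliability point concrete is to compare a trip’s average time with the time a traveler must budget to avoid being late. The sketch below, using invented trip times, computes an average and a 95th-percentile “planning time” to show that the buffer travelers must build in is driven by variability, not by the average.

```python
# Sketch: why travel-time variability matters more than the average.
# Trip times (minutes) are invented for illustration.
import statistics

trip_times = [30, 32, 31, 45, 29, 33, 60, 30, 31, 44, 30, 55]

mean_time = statistics.mean(trip_times)
# Budget enough time to arrive on time on roughly 95% of days.
planning_time = sorted(trip_times)[int(0.95 * (len(trip_times) - 1))]
buffer_minutes = planning_time - mean_time

print(f"Average trip: {mean_time:.1f} min")
print(f"Time to budget (95th percentile): {planning_time} min")
print(f"Buffer forced by unreliability: {buffer_minutes:.1f} min")
```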

Policymakers also have a poor understanding of the mechanics of traffic congestion, which is highly localized in time and space. Well over 90 percent of our roads are uncongested for well over 90 percent of the time. Some congestion–indeed, up to a third of all traffic delay–is caused by incidents that are difficult to predict, such as accidents, spilled loads, or construction equipment. Recurrent congestion, caused by demand outstripping capacity, occurs mostly at busy activity centers and important bottlenecks such as bridges, tunnels, and critical intersections. When overall congestion becomes worse, however, it generally does not become more intense at locations that are already heavily congested; rather, it spreads over longer periods each day and to additional locations. Drivers can often avoid congestion by choosing alternate routes or times at which to travel, but as many people leave earlier or later for work or choose an uncongested boulevard in preference to a crowded expressway, they gradually cause congestion to build at those times and on those alternative routes.

Traffic congestion is also nonlinear, meaning that when volume doubles or triples on a lightly traveled street the effect on travel times is minimal, whereas adding just a few cars and trucks to a crowded roadway causes large increases in delay. This explains why traffic seems to be much worse on the day that school reopens in the fall and to be surprisingly light in New York or Boston on Jewish holidays. Adding or removing only a small fraction of all travelers can make an enormous difference in traffic flow, which makes traffic eminently subject to management strategies. Although congestion is nonlinear, people think in linear ways; congestion on a major bridge leads to calls for another bridge, even though small adjustments could quite dramatically reduce delay.
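
The nonlinearity described above is often illustrated in traffic engineering with a volume-delay curve. The sketch below uses the conventional Bureau of Public Roads (BPR) formula with its textbook parameters (an illustration chosen here, not a formula drawn from this article) to show how the same increment of traffic costs far more time on a crowded road than on a lightly used one.

```python
# Sketch: the nonlinear relationship between traffic volume and travel time,
# using the conventional Bureau of Public Roads (BPR) curve as an illustration.
# Parameters (alpha=0.15, beta=4) are the textbook defaults, assumed here.

def bpr_travel_time(free_flow_minutes, volume, capacity, alpha=0.15, beta=4):
    """Estimated travel time on a road link as volume approaches capacity."""
    return free_flow_minutes * (1 + alpha * (volume / capacity) ** beta)

capacity = 2000     # vehicles per hour (assumed)
free_flow = 10.0    # minutes to traverse the link at free flow (assumed)

for volume in (400, 800, 1200, 1600, 2000, 2400):
    t = bpr_travel_time(free_flow, volume, capacity)
    print(f"{volume:>5} veh/hr -> {t:5.1f} min")
# Each additional 400 veh/hr costs more time than the last: the curve is convex,
# so small changes near capacity matter far more than large changes on light roads.
```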

A long history

Congestion is not a new phenomenon, and every civilization has developed innovative solutions to control or accommodate it. In ancient Rome, the Caesars noted that the passage of goods carts on narrow city streets so congested them that they became impassable and unsafe for pedestrians. A government edict required goods vehicles to make deliveries at night, but this policy was soon overturned because citizens complained that their sleep was interrupted by the sounds of vehicles traversing the pavement and of animals straining under their loads. Charles II of England issued a famous edict in 1660 to ban standing carriages, wagons, and horses from the streets of Westminster and London because they were excessive and were creating a public nuisance. He ordered that they be required to wait for their passengers off the main thoroughfares to enable the traffic to flow more freely on the boulevards.

Industrialization brought urbanization, and 19th-century cities were incredibly crowded places. Most people walked to work or lived above or behind their businesses, and rudimentary horse-drawn public transit was too expensive for most citizens. Population densities in industrial cities were many times what they are today, and urban congestion was then widely understood to mean the crowding of people in limited space. By the late 19th century, the high density of dwelling units, high occupancy of residential quarters, proximity of living areas to working areas, environmental hazards of factories, and transportation systems based on animal power were together defined as congestion. The innovation that addressed this problem was improved public transportation, first on the surface and powered by horses; later elevated or underground and powered by cables, steam, and eventually electricity. Affordable and reliable public transportation meant that people could live farther from where they worked and travel much more. At first, only the rich could move away from the center, but gradually fares fell in relation to incomes, and more and more people could commute to work. At the first national Conference on Planning and the Problems of Congestion nearly 100 years ago, speakers urged lower densities and the deliberate suburbanization of the population. In New York, zoning was introduced in part to lower the land use intensity so as to ease overcrowding. The flat subway fare (meaning that the fare was the same for a 20-mile journey as for a 1-mile trip) was adopted to encourage lower-income people to move out of the city center and new immigrants to locate in outlying neighborhoods, which were considered safer and more healthful than the crowded downtown areas.

Extensive integration of IT with the transportation network is key to managing congestion growth.

As more people moved out of the centers of large cities and relied on public transportation, the perception of congestion changed from crowded neighborhoods to crowded streetcars on tracks so filled with trolley cars that movement was extremely slow. Innovations that helped ease this new form of crowding included the construction of the first urban elevated routes and, just before 1900, the development of underground transit routes, along with the development of signaling systems to control complex flows in the transit networks. Grade separation of vehicles with passengers from pedestrians and horse-drawn goods vehicles provided the capacity for more movement within cities, permitting both growth and decentralization.

Rapid declines over just a few decades in the cost of auto ownership in relationship to worker wages meant that many more people became mobile. Automobiles provided an order of magnitude increase in movement capacity and meant that cities could continue to grow and spread. The most rapid growth rates in automobile ownership and drivers’ license holding occurred between 1910 and the Great Depression, and city streets became very crowded with motor vehicles during that time. Innovations devised during this period by engineers, politicians, and bureaucrats included the widening of roads and the rationalization of street networks by, for example, straightening streets and making them more continuous with one another. Busy intersections gradually came to be managed by signs and mechanical signals that were eventually replaced by electric signals that later were coordinated with one another into systems that accommodated higher traffic volumes. Proposals for access-controlled and grade-separated roadways also originated in this period, but years of depression and war slowed their adoption as automobile ownership and use continued to grow. After World War II, prosperity returned and growth picked up in employment, the economy, and travel. In response to dramatic increases in congestion, the federal government in the 1950s planned, and over 40 years built, a national system of “interstate and defense highways,” encouraging state governments to build more than 40,000 miles of freeways by providing them with more than 90 percent of the money. Roadway capacity for a short while grew faster than motor vehicle travel, so this growth in new capacity seemed to solve the problem of congestion, but population and economic activity also expanded; land use became more dispersed; and, as the statistics in the opening paragraph indicate, over time goods movement and passenger travel have grown to utilize and surpass the capacity of the road network.

During the past 20 years, the political liabilities of new highway capacity have come to outweigh its benefits. Community disruption, land taking, decentralization of population, production of air pollution, and dependence of the automobile and highway system on petroleum energy sources all limit the likelihood that government policy will emphasize continued expansion of roadway networks. It is now common to say that we cannot build our way out of congestion, because new roads induce new traffic. Whereas decentralization of the city was to an earlier generation the solution for congestion, many today urge that we slow the pace of suburbanization by promoting “smart growth” that includes dense commercial and residential nodes of development at transit stations. And whereas road construction was to an earlier generation the solution to traffic congestion, today it is just as often seen as the cause of the problem.

The limitations of smart growth

Environmentalists and urban planners have adopted smart growth as the ultimate solution to congestion. They urge that we cluster development near transit stations, increase urban densities, and mix land use, including putting stores and housing together, so that people can live without relying so much on their cars. By redirecting growth back into the city center, they believe that more people will be able to walk and use public transit and that automobile use will decline. This approach appeals to intellectuals, who are often fond of the kinds of environments found in downtown New York, Boston, and San Francisco, and their proposals are exciting for many reasons. Those reasons, however, do not include potential reductions in congestion. In fact, this strategy seems to confuse the solution with the problem. Should we emulate Hong Kong, Tokyo, or Manhattan as the strategy for alleviating congestion?

It is true that low-density environments create more vehicle miles of driving per capita or per household than high-density environments. Without doubt, people are more likely to walk and use public transit in dense, mixed-use urban neighborhoods, but they are likely to do so in part because those neighborhoods are seriously congested. Can congestion be seen as the cure for congestion? Yes, but only in part. A strategy that creates more dense, mixed-use, transit-oriented communities and fewer low-density suburban neighborhoods can reduce vehicular travel in the aggregate, but at the expense of greater congestion in our city cores. A suburban neighborhood that contains five dwelling units per acre might produce 10 person-trips per day per household, which by simple arithmetic means 50 trips per acre per day, few or none of which would be made by walking or public transit. An urban neighborhood with 20 dwelling units per acre might, by contrast, produce only seven person-trips per household, but the same arithmetic shows that this neighborhood would produce 140 trips per acre per day. If 10 or 20 percent of these trips were made by walking or public transportation, the urban neighborhood would still produce more automobile traffic per acre than the suburban neighborhood. In other words, smart growth does reduce overall automobile travel, but it does so by creating congestion rather than relieving it. This is not necessarily bad, but it implies that many planners and environmentalists are disingenuous when they urge us to fight congestion through smart growth. Like the politicians, they really want more congested environments but presumably want that congestion to be somehow managed and accommodated. If it is not accommodated, people will start to move to the suburbs specifically to avoid congestion, and that will create more reliance on automobiles.
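
The per-acre arithmetic in the preceding paragraph can be written out directly. The sketch below simply restates the article’s illustrative densities, trip rates, and mode shares and computes vehicle trips per acre for the two neighborhood types.

```python
# Restating the article's per-acre trip arithmetic for two neighborhood types.
# Densities, trip rates, and mode shares are the article's illustrative figures.

def auto_trips_per_acre(units_per_acre, trips_per_household, walk_transit_share):
    total_trips = units_per_acre * trips_per_household
    return total_trips * (1 - walk_transit_share)

suburban = auto_trips_per_acre(5, 10, walk_transit_share=0.0)    # 50 trips/acre
urban = auto_trips_per_acre(20, 7, walk_transit_share=0.20)      # 112 trips/acre

print(f"Suburban: {suburban:.0f} auto trips per acre per day")
print(f"Urban:    {urban:.0f} auto trips per acre per day")
# Even with a fifth of trips made on foot or by transit, the denser
# neighborhood generates more than twice the vehicle traffic per acre.
```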

Applying information technology

What we choose to do about worsening congestion in the next few decades will be a product of the long and complex history of multiple innovations outlined above and also of the types of innovations and technology that characterize the current era. If history teaches us any lessons, it is that the effectiveness of available technical innovations will be tempered and directed by political priorities and interpretations of what is possible and desirable. Today there is little political will to dramatically expand existing highway networks and little support for extreme measures, such as vehicle restrictions that could control congestion but stifle economic growth. A large proportion of available transportation resources will be needed to maintain, replace, and repair our existing aging highway and transit networks, leaving little money to spend on new roads or expanded transit systems.

At the same time, the major force influencing the world economy in recent years has been information technology (IT). Rapid and extensive integration of IT with the transportation network is already underway and is the key to the management of congestion growth. Thus far, however, the accomplishments are quite modest in comparison with the possibilities.

Travelers today can receive directions to their destinations in their vehicles on handheld computers or by using devices incorporated into their dashboards. Most currently available information is similar to a traditional road atlas in that route information is not yet modified by data on current traffic conditions. For 30 years, traffic and transportation authorities have been gradually incorporating instruments into roadways and vehicles to provide increasingly useful information for managing traffic flows. “Loop detectors” buried under arterial streets and freeways report on traffic density, and the data they collect are being used to estimate speeds and travel times with increasing accuracy. In some cities, these data are being used to optimize the timing of traffic signals in order to maximize flows on segments of street networks. Cameras located on bridges and over busy intersections complement the data collected from the detectors to feed visual images of incidents to traffic control centers from which tow trucks and emergency vehicles can be dispatched when needed.
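
As an illustration of how detector data are turned into speed estimates, the sketch below applies the standard flow-occupancy relationships used in traffic monitoring; the effective vehicle length and the measurements are assumed example values, not data from any particular system.

```python
# Sketch: estimating density and speed from loop-detector measurements.
# Uses the standard relationships  density ~ occupancy / effective length
# and  speed = flow / density.  All inputs are assumed example values.

flow_veh_per_hr = 1800        # vehicles counted per hour in one lane (assumed)
occupancy = 0.12              # fraction of time the detector is occupied (assumed)
effective_length_m = 6.5      # average vehicle length plus detection zone (assumed)

density_veh_per_km = occupancy / effective_length_m * 1000
speed_km_per_hr = flow_veh_per_hr / density_veh_per_km

print(f"Estimated density: {density_veh_per_km:.1f} veh/km")
print(f"Estimated speed:   {speed_km_per_hr:.0f} km/h")
```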

We now have the technical capacity to integrate into one system the mechanisms for financing roads and controlling congestion.

Thus far, most applications of this technology have enabled us to improve the management of parts of the transportation system in real time on the basis of information on current flows. Because traffic patterns repeat themselves day after day, techniques are emerging that will soon enable us to merge historical data with information taken from the monitoring of current flows to predict traffic patterns with increasing accuracy over the coming minutes and hours. This information will in the near future be made available to potential travelers over the Internet and through cell phones, car radios, or dashboard display screens to those already on the road.
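
A bare-bones version of the historical-plus-current blending anticipated here might look like the following sketch, which scales the historical profile for a later hour by how today’s traffic compares with a typical day; the weighting scheme and the numbers are illustrative assumptions, not a description of any deployed system.

```python
# Sketch: blending a historical travel-time profile with a live measurement
# to predict conditions a short time ahead.  Weights and data are assumed.

historical_profile = {          # typical corridor travel time (minutes) by hour
    7: 22.0, 8: 31.0, 9: 26.0,
}

def predict(hour_ahead, current_measurement, current_hour, weight_current=0.6):
    """Blend live data with the historical value for the target hour."""
    baseline = historical_profile[hour_ahead]
    # Scale the baseline by how today compares with a typical day so far.
    ratio = current_measurement / historical_profile[current_hour]
    return weight_current * baseline * ratio + (1 - weight_current) * baseline

# It is 7 a.m. and the corridor is running about 10% slower than usual.
print(f"Predicted 8 a.m. travel time: {predict(8, 24.2, 7):.1f} min")
```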

The extent to which the application of IT will allow us to better manage traffic flows to save travelers time and money is in the longer term more likely to be limited by political and social considerations than it is by the technology itself. For example, it is technologically feasible to track vehicle locations and to provide drivers with specific information on the current and projected traffic levels and travel times on several alternate routes. However, concerns about intrusion into personal privacy could limit the use of this innovation.

Because they present fewer challenges to privacy and produce greater gains in efficiency, these technologies will be applied more quickly to trucks and public transit vehicles. Truck fleets and transit agencies already use Automatic Vehicle Location (AVL) technology based on the Global Positioning System (GPS) to keep track of the location of vehicles on city streets. Trucks can be assigned additional pickups and deliveries while in service based on their current locations, and this type of information is increasingly used to tell bus drivers to bypass certain stops in order to fill gaps in service. Through display terminals at bus stops or through cell phone access, the same information is also beginning to be used to tell bus riders the expected arrival time of the vehicle they hope to board. Such innovations will help us manage traffic congestion, and many believe that applications of “intelligent transportation systems” can accommodate up to half of the growth in congestion that will occur over the coming decades. That’s impressive, but is it enough?
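
To show how little machinery a basic arrival-time estimate requires, here is a minimal sketch that divides a bus’s remaining distance by its recently observed speeds; the distance and speed readings are invented example values.

```python
# Sketch: a bare-bones bus arrival estimate from AVL position reports.
# The remaining distance and recent speeds are invented example values.

def eta_minutes(distance_to_stop_km, recent_speeds_kmh):
    """Estimate arrival time from remaining distance and recent observed speeds."""
    avg_speed = sum(recent_speeds_kmh) / len(recent_speeds_kmh)
    return 60 * distance_to_stop_km / avg_speed

# A bus is 2.4 km from the stop; its last few speed readings in mixed traffic:
print(f"Next bus in about {eta_minutes(2.4, [18, 22, 15, 20]):.0f} minutes")
```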

Congestion pricing

In the past, the vast majority of the costs of building and operating transportation systems have been paid through a system of user fees. Tolls are the most direct user fees, with fuel taxes really functioning as surrogate tolls, because they collect money roughly in proportion to how much we drive. When fuel taxes were adopted more than 80 years ago, they were seen as inferior to tolls because they didn’t levy charges at the location and time of travel. But fuel taxes had lower costs of administration; just a few percent of the fuel tax is spent to cover the costs of collecting the money, whereas the cost to operate tollbooths often amounted to a quarter of the tolls collected.

Americans are by and large not even aware that as much as one-third of the cost of gasoline at the pump is a charge (technically a fee rather than a tax) used to cover the costs of building, maintaining, and managing roads and transit systems. Over time, however, improved vehicle fuel economy and political reluctance to raise the price of gas have reduced the fiscal productivity of these fees. In the near future, hybrids, electric cars, and fuel cell-powered vehicles may make fuel taxes obsolete as a source of funds with which to finance the transportation system. This apparent problem could actually be the key to finally solving the problem of highway congestion.

Economists have long argued that the only way to completely solve the congestion problem is through congestion pricing. Economic theory says that the price of using roads should be higher at the places and times of day when the demand for (and benefit from) them is greatest. If it cost, for example, three times as much to cross a toll bridge at the time of highest congestion as it does in the middle of the night, some travelers would surely be more likely to use public transit, form carpools, use less crowded alternate routes, or reschedule less essential trips to off-peak hours. It is theoretically possible to eliminate congestion through pricing, because in principle the price can ultimately be raised to a level high enough to clear the traffic jam. There are now a dozen or more travel corridors throughout the world where variable pricing for travel is in use, including a small handful in the United States. Congestion pricing has been used successfully in Singapore for more than 25 years, and London is planning to implement such a system early in 2003.

Although transportation experts have written about congestion pricing for decades, one of the major obstacles to its implementation has long been the technical difficulty of collecting tolls: Building toll plazas and varying the charges with time of day and class of vehicle are complex, expensive, and politically problematic tasks. But the recent advances in IT now make congestion pricing much more technically feasible. Small inexpensive transponders, already in use in millions of vehicles to pay tolls, enable each motorist to be charged a different fee to use each segment of road at a particular time of day. The charges can appear on monthly credit card bills. I can envision a future in which the familiar “gasoline tax” is eliminated, especially because gasoline itself may have a limited future as a source of power in transportation. Instead, motorists would be charged more directly for the use of roadways through simple applications of IT.
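
To make the transponder-based charging idea concrete, the following sketch computes a trip charge from a per-segment, time-of-day rate table; the segment names, rates, and peak periods are invented for illustration and are not a proposal from the article.

```python
# Sketch: computing a road-use charge from per-segment, time-of-day rates,
# as a transponder-based system might.  Segments, rates, and peak hours are invented.

PEAK_HOURS = set(range(7, 10)) | set(range(16, 19))   # 7-9 a.m. and 4-6 p.m.

rates_cents_per_mile = {            # (segment, is_peak) -> cents per mile
    ("bridge_approach", True): 60, ("bridge_approach", False): 10,
    ("downtown_arterial", True): 30, ("downtown_arterial", False): 5,
}

def trip_charge(segments, hour):
    """Total charge in dollars for a list of (segment, miles) traversals."""
    peak = hour in PEAK_HOURS
    cents = sum(miles * rates_cents_per_mile[(seg, peak)] for seg, miles in segments)
    return cents / 100

commute = [("downtown_arterial", 3.0), ("bridge_approach", 1.5)]
print(f"8 a.m. charge:  ${trip_charge(commute, 8):.2f}")    # peak rates apply
print(f"11 p.m. charge: ${trip_charge(commute, 23):.2f}")   # off-peak rates apply
```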

We now have the technical capacity to integrate into one system the mechanisms for financing our highway system and controlling congestion. Charging more than we now do for the use of the busiest roads at the busiest times of day, and quite a bit less than we now do at other times, would be the fairest and most efficient way to raise the funds needed for operating and expanding the capacity of the transportation system. At the same time, we would use the charges to meter the use of the system to control congestion. Some argue that the accounting system needed for congestion pricing will be an invasion of privacy, but it is possible to prevent this by using numbered accounts. Others argue that congestion pricing discriminates against the poor. Yet the current system of transportation finance is not at all neutral with respect to income, and a system of direct charges for actual benefits gained from using the system is inherently fairer than a complex system of cross subsidies. For many trips, the proposed approach would provide for a lowering of trip costs in comparison with the current means of pricing travel. And it would surely be possible to offer lifeline rates to the poor.

Personal mobility and the transportation system will be deeply affected by IT during the coming decades. Many applications of IT to traffic congestion relief will be the product of innovations by private firms. Within just a few years, for example, and without government intervention, we will be reserving our parking spaces electronically as we approach airports and shopping centers, rather than cruising for an available vacant space.

If history is a guide, we now have the technical means at hand to finally solve the congestion problem, which means that the most significant determinants of the future use of IT for traffic control will be political rather than technical. Based on the history reviewed here, I believe that the goal of policymakers should not be to eliminate traffic congestion but rather to strike a new balance among growth, congestion, and the political acceptability of the measures by which that congestion could be eliminated.

Archives – Summer 2002

Photo: USNC-IGY

Citizen Astronomers

When the first artificial satellites were sent into orbit around the earth in 1957, some way of tracking them was called for. As early as 1955, the Smithsonian Astrophysical Observatory (SAO), as part of its contribution to the International Geophysical Year of 1957-1958, had organized project Moonwatch, a program coordinating amateur astronomers around the world into teams for visual satellite tracking. Reports of Moonwatch volunteers’ observations of the early Sputnik, Explorer, and Vanguard satellites were sent to SAO and processed by scientists, who then determined the satellites’ orbits and also derived information about the Earth’s upper atmospheric density. Moonwatch continued to operate into the 1960s. Pictured here is a Japanese team of Moonwatch observers at their stations.

Overfishing

The environmental issues confronting the United States and the world are varied, huge, and often incredibly complicated: global warming, nutrient runoff and nonpoint source pollution, water quality and overuse, and so on. For each of these, the geographical scope is large, the constituencies are many, and the political battles are fierce. Even more problematic is that for many of these problems, we may not really know what needs to be done or how to do it. Take fisheries conservation and management. The ocean is certainly a big place, but few Americans actually make their living from it or even know the source of their seafood. Restrictions and regulations affect few people. Even in New England, there are only a few thousand fishermen. In this case, though, we do know what to do, even if determining how to do it is harder. Overfishing is a big problem; to deal with it, we need to fish less.

Michael Weber’s excellent history of U.S. fisheries policy demonstrates how difficult it has been to come to grips with even a relatively small-scale environmental problem. Weber’s style is concise, readable, and quite fast-paced for a policy review. He shows the clear line of development of the U.S. Fish Commission, the Bureau of Commercial Fisheries, and the National Marine Fisheries Service as successive government agencies with changing levels of responsibility for U.S. marine fisheries science, policy development, management, and regulation. The book digs into why each of the agencies developed and acted as they did then and now. Rather than taking the simplistic view that government policy resulted from some grand scheme or political dogma, or from the desire for centralization and power that some ascribe to the federal government, Weber shows how personalities, the politics of the time, congressional direction, and industry intervention and interest group lobbying have shaped marine resource management policy. He also describes some of the startling failures of those policies.

The book documents the development of the fisheries management agencies as a partner with the domestic fishing industry to foster its development from the late 19th century to the 1980s. From product development and hatchery rearing to vessel subsidy and support to build up the fleet, U.S. fisheries policy was designed for economic development, not environmental conservation or sustainable use. Congress set the policy, of course, but it did so in direct response to industry wishes. Thus, I have found it ironic to hear industry representatives cite government buildup of fishing fleets as the cause of overfishing, as if industry was a reluctant recipient of these subsidies. I have even heard this line from members of Congress. Weber lays bare the industry pressures that resulted in congressional action.

One of the book’s best illustrations concerns the damming of rivers in the Northwest and subsequent attempts to maintain salmon stocks through hatchery production. Those attempts have been abject failures, but that hasn’t stopped the continued pouring of funds into hatchery production. The millions of dollars poured into hatcheries each year remain a locally popular investment, even though the program does not address the underlying problem: dam building has left dozens of salmon runs endangered. It is a policy developed in a vacuum: Build the dams and then worry about the fish. Build the hatcheries even though that’s an inadequate response. Then maintain both, because removing either a dam or a hatchery would cause some economic dislocation.

Much of the money and effort directed at fisheries problems over the years has been for science programs, which Weber cogently describes. Yet it is incredible how selectively the science has been used in actual policy development. In essence, the science was chosen to fit the politics necessary to move forward with a particular policy. Although the words of scientific management were espoused, science and policy often moved on separate tracks. Recently, I have heard calls to remove the scientific enterprise even further from policy development in the name of independence. Imagine how difficult it would be to keep science in the forefront of policy decisions if science were no longer a responsibility of the principal management agency.

Weber’s book is full of anecdotes, interviews, and quotes from those involved in fisheries over the years. This gives the book its readability and also much of its insight into the policy formulation process. Reading comments from agency managers, politicians, and industry groups shows how politics has usually won out over the need to conserve resources and adequately manage fisheries. The pressure to build fleets and push fishing limits to the edge of unsustainability and beyond has resulted in stock declines and collapses in all U.S. waters. The end result has been a short-term collapse of local industries, usually attributed to regulation but really the result of depletion of fish stocks. The long-term result is a very difficult road to recovery, if recovery is possible at all.

Weber’s book is a good place to start in developing a list of problems that need to be addressed in U.S. fisheries management. These include separating conservation decisions from allocation between user groups; defining state-federal interactions and lines of authority; strengthening the conservation mandate in the underlying statutes; and most of all, deciding what our governing principles should be for using ocean resources. Two high-level commissions are currently addressing these and other issues. The Pew Commission on Ocean Policy is a privately funded effort. The U.S. National Ocean Policy Commission, authorized in the Oceans Act of 2000, has a mandate to provide recommendations to the president and Congress on all aspects of U.S. ocean policy except national security. Hopefully, they will take some of their cues from Weber’s history.

The continuing saga of the fishing industry and its problems is best seen in the example of the New England groundfish industry, which is woven throughout the book. Here, the most detail is given about the interplay of forces that took us from abundance to scarcity. In New England, there were unrealistic expectations for prosperity after foreign fishing fleets were displaced, but what actually occurred was continual congressional micromanagement, huge industry pressures to keep regulations at bay, and finally a tragic fishery collapse. Now, however, many of the stocks are actually recovering, largely because of regulatory action. The struggle is to keep the recovery going. Yet, despite evidence that management can work, there are indications that a new cycle of inaction, lawsuits, and recrimination may be developing.

As the debate continues, the intriguing question is whether the collapse of fishing stocks and greater understanding of the causes have broadened the constituency for marine fisheries issues sufficiently to stem the potent political pressures for a return to the failed policies of the past. Weber’s book leads us to that question, and I wouldn’t mind hearing his answer in a year or so. After all, he has given us a roadmap for the past century. How about the next?

Nature and Profits

The title The New Economy of Nature seems to promise fresh insight into the economics of conservation. Regrettably, the book does not deliver. Gretchen Daily is an ecologist at Stanford University. Katherine Ellison is a Pulitzer Prize-winning journalist. They are lively and entertaining writers, but they are not economists. This fact does not prevent them from providing serviceable descriptions of some fundamental economic principles. It does, however, keep them from grappling effectively with the key issue in conservation economics: What should government do, and what should be left to the market? Because the authors don’t have a clear answer, the book fizzles to an unsatisfying conclusion. At the end, the reader is left unsure as to the point the authors meant to make–and worrying that, despite their good intentions, their work could do more harm than good in resolving the vexing problems of conservation.

Daily and Ellison accept the most orthodox of economic precepts: Scarcity implies value. This ought to be as true for what Daily in an earlier book called “nature’s services” as it is for petroleum or real estate. When natural ecosystems are in short supply, people will pay more for their services. The next logical development is that entrepreneurs will begin to make money by providing these valuable services.

If Daily and Ellison had intended simply to applaud and encourage entrepreneurs offering eco-friendly goods and services, a shorter version of the book might have succeeded in entertaining and informative fashion. It would also have duplicated a message conveyed by free-market environmentalists such as Terry Anderson and Donald Leal. Yet Daily and Ellison clearly do not share Anderson and Leal’s laissez faire philosophy. They write that “private enterprise cannot substitute for governments . . . we strongly believe that government regulation is called for to kick-start and supervise the profound economic transformation needed.”

What precisely is the role they envision for public policy, then? Daily and Ellison are never clear on what they characterize as this “great unanswered question.” One thing government can and does do is to make things artificially scarce and, therefore, valuable. By restricting the amount of pollution industry is allowed to produce, government can make tradable emission permits valuable. This will, in turn, motivate polluters to economize on emissions. Tradable permits are widely recognized as the least costly way to achieve a given environmental objective. If The New Economy of Nature convinces those in the environmental advocacy community who have still not accepted this logic to embrace market-based incentives, it will have performed a valuable service. But arguments for market-based incentives have been staples of economics textbooks and policy debates for decades. Repeating them does not seem to have been Daily and Ellison’s main purpose.

The emphasis Daily and Ellison devote to one prominent set of would-be permit traders, known as the Katoomba group, is curious. This diverse collection of academics, investors, and conservationists first assembled in Katoomba, Australia, to consider new financial incentives for conservation. One of the chief hopes of the group was that international treaties would motivate a market in carbon emissions trading. (Similar international agreements or national policies could create markets in ecological assets, such as natural habitats that provide water purification.) An agreement such as the Kyoto Protocol would make the right to emit carbon scarce and hence valuable. This would, in turn, create a demand for financial instruments such as carbon futures contracts: a contract guaranteeing its bearer the right to emit a certain quantity of carbon in the future. If an international agreement were passed limiting global carbon emissions and permitting trading in rights, such contract holders would become wealthy.

One would certainly hope that enriching such investors is not the goal of international climate policy, however. Markets in securities such as carbon futures can help ease the burden of regulation by lowering the costs of compliance and spreading risks, but the decision about whether or not to cap global carbon emissions ought to be made on the basis of the underlying costs and benefits of doing so. Daily and Ellison seem to be confusing cause and effect by hailing some Katoomba group players as “visionaries” who “offer up inventions and push governments along.”

Daily and Ellison profile others who are much more effective in pushing governments along. In many of these descriptions, though, it seems that what is being described is not a new economy so much as familiar politics. Chapters of the book are devoted to local land-use choices in Napa, California; King County, Washington; and New York State. The city of Napa had to decide how to manage the river of the same name. Daily and Ellison describe the choice as an instance of “vivid progress toward the establishment of a new economy of Nature, in which the labor of ecosystems is formally respected.” The “labor of ecosystems” is a revealing phrase. Who pays its wage? One could not object if a majority of voters decided to bear the costs of conservation. The citizens of Napa did, in fact, tax themselves for part of the cost of restoring their river. But Daily and Ellison also write that “it looked as if Napa residents would pay much less than half of the total bill,” with the remainder of funds coming from a variety of public programs. Was it appropriate for taxpayers elsewhere in California and the nation to have footed part of the bill, or did backroom deals stick them with subsidizing the citizens of Napa? This is the type of question one would have liked to see Daily and Ellison address more directly.

Skeptical view

The authors should be commended for producing a far more skeptical and even-handed review of New York City’s celebrated Catskill watershed restoration program than have many other authors. New York City began an extensive program of land acquisition and land-use restriction in the watershed serving its Catskills reservoir in order to avoid a U.S. Environmental Protection Agency requirement to build an expensive filtration plant. Some have claimed that the Catskill program saved the city billions while affording fair treatment to all affected landowners. The New Economy of Nature entertains doubts on both assertions. The city believed that the natural landscape could provide sufficient filtration services if it were restored. Although Daily and Ellison agree that this is the most likely outcome, they also note that it is not a sure thing either that the natural system will suffice in the long run or that the cost advantages of the natural approach will always prevail. As for the notion that the program was implemented entirely by voluntary transactions between landowners and city authorities, the reaction the authors ascribe to one local businessman is trenchant: The restrictions imposed on him were in his words “an absolute outrage, a thievery.”

Such careful reporting is the strongest point of the book. Even those who have followed developments in conservation-related markets fairly closely in recent years are likely to learn interesting things from Daily and Ellison. The book is longer on reporting than analysis, though, and the New York City watershed section reemphasizes its frustrating ambiguity. Is this example really an instance of what the book’s subtitle calls the quest to make conservation profitable? So long as we don’t know if those who are receiving the benefits are paying the costs, we can’t tell.

Such issues are nowhere more problematic than in an example offered in the book’s final chapter. Does the new economy of nature encompass dubious tax deductions? Allegheny Power, an electric utility, owned land in the Canaan Valley of West Virginia. With the assistance of Adam Davis, an alumnus of the Katoomba Group, Allegheny had the land appraised for the value of its ecological assets. When the land was sold, Allegheny claimed a $15-million tax write-off, arguing that this was the unrealized value of services such as carbon sequestration and habitat preservation that the land provides. If there really is a market in which the company could sell the land for the price it claims, is it failing in its fiduciary responsibility to shareholders by not holding out for the full value? If there is no such market, should the Internal Revenue Service disallow the tax deduction?

Other chapters detail the efforts of entrepreneurs to make conservation profitable. Daily and Ellison report on biologist Dan Janzen’s attempts to use parks in Costa Rica to provide waste recycling and other services. Another chapter follows John Wamsley, a colorful character whose Earth Sanctuaries Limited nature reserves and ecotourism destinations became the first eco-enterprise listed on a major stock exchange. In a third chapter, Daily and Ellison describe how farmers can earn higher profits in the long run by following more ecologically benign practices. Such reports raise questions. The authors are reporting what farmers, wildlife managers, and park authorities have already done to earn more money. Are they suggesting that others could benefit from emulating them? If so, is a book such as The New Economy of Nature sufficient to apprise them of the possibilities, or should government be spreading the word more aggressively? If not, have farmers and park managers largely solved the conservation problem on their own? Again, the reader is left wondering what role Daily and Ellison would have public policy play.

This lack of guidance is especially troublesome, as it is not clear that all the profit-seeking activities the authors report are, in fact, ecologically benign. Consider, for example, William Harper, inventor of a system to deploy genetically modified “designer pollinators” over agricultural fields using a mortar. Daily and Ellison remark simply that his ideas “highlight a growing sense of financial potential in the overall beneficial-bug industry.” Others might question the ecological wisdom of dispersing genetically modified insects widely. In other instances, such as in their discussion of the potential adverse ecological effects of ecotourism, Daily and Ellison do raise similar doubts. However, they never address head-on the pivotal question of whether the appropriate public policy is to take an entirely hands-off attitude, subsidize such ventures, restrain their excesses, or pursue some combination of the last two.

The New Economy of Nature is engagingly written and thoroughly researched, but ultimately frustrating. It never makes its intentions clear. What exactly is the role Daily and Ellison propose for public policy? The book frequently confuses what can be accomplished in markets with political choices. A more constructive approach would recognize appropriate realms for both economics and politics. When conservation is profitable, markets should not be constrained. Quantitative regulations can be most efficiently implemented with market-based incentives, such as tradable permits. In some instances, however, economics may not provide an answer as to how many permits should be issued. The benefits of environmental improvement may be so uncertain and diffuse as to prevent accurate measurement. In these circumstances, society has to make political choices. Such choices should be made transparently, with the participation of an informed electorate that appreciates the inherent ambiguities. The New Economy of Nature presents some interesting and intriguing examples, but in the final analysis it delivers a garbled message by confusing political and economic considerations.

Countering Terrorism in Transportation

Americans now face almost weekly warnings about potential terrorist targets, from banks and apartment buildings to dams and nuclear power plants. This threat of terrorism is not new to transportation. From jet airliners to mass transit buses and rail terminals, vehicles and transport facilities are all-too-familiar targets of terrorist attacks in this country and abroad. Although new attacks could occur, we cannot simply accept a recurrence as inevitable. Terrorist attacks on the transportation system can be derailed, and they can also be deterred.

Successful transportation counterterrorism, however, will require a new strategy. There is no point in trying to protect against or weed out every possible opening for terrorists. That is a traditional approach to transportation security, but it is expensive and demonstrably ineffective. The new strategy should rely instead on layering and interleaving various defensive measures. With layering, each safeguard, even though it may be inadequate by itself, reinforces the others. A layering strategy will not only protect against vulnerabilities in transportation security, it will also deter terrorists by creating uncertainties about the chances of being caught. Developing a better transportation security strategy should be the job of the newly created federal Transportation Security Administration (TSA). We present some recommendations for how the TSA should proceed.

Transport vehicles are ubiquitous, moving virtually unnoticed within industrial locations and major population centers; across borders; and (in the case of mail and express package services) to nearly every household, business, and government office in the country. This is how our transportation system works and must continue to work.

Using four jet airliners as cruise missiles, however, the September 11 attackers showed how the omnipresent air transportation system could be turned into a weapon far deadlier than ever envisioned by those charged with aviation security. Only a few weeks later, the mailer of anthrax spores, capitalizing on the anonymity and reach of the postal system, showed how a seemingly innocuous transportation mode could be turned into a weapons delivery system.

The question now facing the federal government and the many state, local, and private entities that own and operate the nation’s air, land, and water transportation systems is how best to secure these systems to keep them from being exploited again to such tragic effect.

The solutions are not obvious. The very nature of the transportation enterprise is to be open, efficient, and accessible. Security that restricts access and impedes transportation can send costly ripple effects throughout the national economy and society. Moreover, the vast scale and scope of the transportation system means that efforts to protect its many potential vulnerabilities through traditional means–guards, guns, and gates–may do little more than disperse and dilute the nation’s security resources. Determined attackers can find ways to defeat such single-tiered perimeter defenses; that is the lesson from the Trojan horse’s penetration of the walls of Troy as well as from modern-day experience with computer hackers.

We contend that setting out to eliminate or defend every vulnerability in the transportation system, one by one, is the wrong approach to countering terrorism. Such attempts are likely to prove futile and costly. How, for instance, does one go about even identifying, much less defending, every possible target in the 300,000-mile rail and 45,000-mile interstate highway networks? Even civil aviation comprises some 14,000 airports and 200,000 airplanes scattered across the country. Deploying one protection at a time, matched to a specific vulnerability, is likely to yield little more than a thinned-out patchwork of unconnected defenses. We now know that an attacker who can find a way to breach a single imperfect protection, as the September 11 hijackers did in defeating the airport screeners, can overcome an entire security regime. Little else stands in the way.

A more sensible approach is to layer security measures so that each element, however imperfect, provides backup and redundancy to another. No single protection is likely to be foolproof or impermeable. When protections are layered, if one fails, others can compensate. A layered system, therefore, does not require a sustained high-level performance from each protection, just a reasonable expectation of success. Long used to secure communications and information systems, the collective layering of protections provides deep, dynamic defenses.

This is not to say that key facilities should be left unguarded. We must identify those elements of the transportation system that require a dedication of security resources because their destruction or impairment would cause considerable harm to people and to the transportation system. Such facilities–key bridges, underwater tunnels, command-and-control centers, and the like–should be guarded, but as part of a layered system of defenses. The hardened cockpit door is not the only defense against an armed hijacker; it is the last in a series of layered defenses.

Most important, the interleaved layers can do more than protect. They can also deter, confounding the would-be attacker by making it difficult to estimate the chances of successfully breaching the many protections. The attacker might be able to seek ways of increasing the odds of defeating a single-tier protection, but calculating and overcoming the odds against beating one tier after another is next to impossible. Terrorists do not like the uncertainty that layering creates.

Given the vast scale and openness of the transportation system, security measures that deter are vital because blanket protections are not feasible. Also important are security measures designed to integrate well with the functions and services of transportation systems. The more that security can provide side benefits, from improved transportation safety to reduced cargo theft or luggage loss, the more it is likely to be maintained and improved by users and operators. We have learned from experience that a regulation-based system without such user incentives is more likely to be treated by industry as another set of rules to be complied with, often in as minimal a fashion as possible.

After the September 11 attacks, Congress passed the Aviation and Transportation Security Act, which created the TSA and mandated a number of actions to improve air transportation security, from federalizing airport passenger screeners to deploying air marshals and luggage explosive detectors at airports. The TSA’s creation was long overdue, providing the potential institutional and analytic means to build coherent layered security systems in all the nation’s transportation modes. Currently, however, the TSA is consumed with meeting statutory requirements to purchase and install costly, burdensome, and perhaps inadequate explosives detection systems at all commercial airports by the end of 2002. The aim of these detectors is to keep bombs out of checked luggage. Yet a bomb set to explode in a suitcase is only one threat to air transportation; it is essential that efforts to prevent such an occurrence do not consume a disproportionate share of the TSA’s attention and resources.

Indeed, there is a real risk that the many security requirements hurriedly mandated by Congress could yield yet another round of ad hoc measures to secure air transportation outside an overall systems context. There is also a risk that the TSA, compelled to implement these and other aviation security requirements, will become primarily reactive in its approach to security. Responding to one legislated requirement after another, the TSA could evolve into a rule-guided enforcement agency that has neither the capacity nor the incentive to develop the overall strategic responses that are necessary to ensure security in all of the transportation modes.

The federal government has a major role in regulating and overseeing air transportation. It does not have as pervasive a role in land and water transportation systems, so neither Congress nor the TSA can take such a hands-on approach to security there. To be effective, the approach must be strategic, and the TSA must take the lead.

Why the piecemeal approach failed

The incident-driven piecemeal approach to security has long been characteristic of commercial aviation, but it failed tragically on September 11. In response to the 1988 Pan Am suitcase bombing over Scotland, and especially the 1996 TWA explosion off Long Island, the major airlines and federal government tried to better employ their limited resources to find bombs in checked luggage with the aid of a computer-assisted passenger prescreening system known as CAPPS. Travelers with certain markers in their reservation record, such as a one-way journey or payment by cash, were singled out by CAPPS and their checked luggage scrutinized carefully for explosives. Yet these same passengers, deemed by the CAPPS algorithms to be higher risk than other passengers, were not screened more carefully at passenger checkpoints or gate check-ins. Although the September 11 hijackers had some of these risk markers, CAPPS was not used to identify them for increased scrutiny at checkpoints and before boarding. CAPPS was deployed to find bombs hidden in suitcases, not to prevent hijackings. It was a particular countermeasure deployed to address a particular vulnerability.
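A minimal sketch may help make the marker-based selection concrete. The code below is purely illustrative: the markers, the scoring rule, and the is_selectee function are assumptions for exposition, not the actual CAPPS criteria or algorithm.

```python
# Illustrative sketch of marker-based prescreening. The markers and the
# selection rule here are hypothetical, not the actual CAPPS criteria.
def is_selectee(reservation):
    """Return True if a reservation carries enough risk markers to be selected."""
    markers = 0
    if reservation.get("one_way_ticket"):
        markers += 1
    if reservation.get("paid_cash"):
        markers += 1
    return markers >= 1  # in this sketch, any single marker triggers selection

# Pre-September 11 practice applied the result only to checked luggage:
# a selectee's bags were scrutinized for explosives, but the same passenger
# received no extra attention at checkpoints or gate check-ins.
print(is_selectee({"one_way_ticket": True, "paid_cash": False}))  # True
```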

Indeed, ever since a rash of handgun-enabled hijackings in the 1960s, the airlines and federal security authorities have taken a reactive and piecemeal approach to aviation security. To intercept handguns used in hijackings, metal detectors and x-ray scanners were installed at passenger checkpoints. From 1994 to 2000, more than 14,000 firearms were detected and confiscated by these airport screeners. The metal detectors and scanners did more than intercept guns, however. They also discouraged the use of guns for hijacking in the first place. Note that the September 11 hijackers did not use firearms. Presumably they feared getting caught by the screeners. However, although the number of firearms intercepted was tracked routinely, the deterrent effects of passenger screening were rarely, if ever, evaluated. If the screeners had been viewed as more than just a means of detecting and intercepting guns–indeed, as part of a total security package that both preempts and inhibits attacks–then the full value and potential usefulness of the screeners would have been better recognized and rewarded.

The best way to strengthen security is to build it into the systems by which transportation is operated and managed.

Understanding what deters terrorists is crucial for designing effective and efficient security systems, especially in the spread-out and heavily used transportation system. If you can’t physically protect or eliminate every vulnerability, then it is important to find ways to deter the act in the first place. Doing so will require a fair amount of creativity and innovation in security methods. This means employing tactics such as randomized security screening, routine traps, clandestine policing, and masked detection capabilities, which together create layers of uncertainty and inhibit terrorist activity through what have been called “curtains of mystery.”

Today, more of these interleaved curtains of mystery are in place, helping to secure air transportation. Terrorists now must wonder whether a more thorough inspection at a checkpoint will uncover their plot, or whether an air marshal might be on board the aircraft, ready to intervene at the final stage. Moreover, flight crews and passengers themselves, more vigilant and observant than ever, are more likely to notice and question unusual patterns of behavior. Even the random inspection of passengers at gates before boarding–procedures criticized by some as nothing more than a means to avoid unpalatable passenger profiling–may, in fact, be a deterrent. Because purely random inspections cannot be avoided except by chance, they provide added uncertainty about the odds of getting caught. Taken together, there are many more potential and hard-to-gauge obstacles to transportation terrorism today than there were one year ago.

The curtains of mystery are there now, but not, it appears, as a deliberate strategy. The curtains need to be placed purposefully, and they need to be based on an understanding of what works to deter as well as to protect. An example of what not to do was announcing to the public that air marshals would be present on certain kinds of airplanes and not others, as Congress did in instructing the TSA to give priority to deploying marshals on nonstop, long-distance flights. Such an announcement may be counterproductive to the entire effort to prevent terrorism. Warned off one target because of uncertainty, the terrorist may very well seek another. To prevent such deflection, it is vital that deterrence strategies be thought through carefully and be well placed to protect potential targets that would be most damaging.

It is, of course, important to mix creative deterrence with creative means of intervention. Since September 11, there has been much discussion about “trusted traveler” programs. The idea is that air travelers would confirm their identities through biometric means and volunteer personal information to aviation security authorities in exchange for faster passage through checkpoints. There is a common misperception, however, that this volunteered information would be used primarily to conduct background checks on passengers, and thus perhaps prompt the repeated singling out of certain groups of passengers for extra security processing that is burdensome and potentially demeaning. Another possible use for such data is at the more aggregate level: to cross-match the characteristics of all travelers on individual flights, or even across a series of flights scheduled at similar times. For instance, a review of passenger manifests, coupled with other information volunteered by travelers and obtained elsewhere from airline databases, credit bureaus, and public records, might reveal that several passengers seated separately on the same flight and with different planned itineraries once shared the same address, traveled together on previous flights, or paid for items using the same credit card. This circumstance might be considered unusual enough to merit closer scrutiny. At a minimum, good data on the characteristics of passenger traffic are crucial for understanding what is normal and thus what is abnormal and possibly suspect. Of course, the many ways in which such data can be used for security purposes can themselves create a level of uncertainty that deters terrorists from targeting airlines.
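As a rough illustration of this aggregate cross-matching idea, the sketch below scans a made-up passenger manifest for pairs who share an address, a credit card, or prior flights together. Everything here, from the record fields to the shared_links function, is an assumption for exposition; no actual airline or TSA data system is being described.

```python
# Hypothetical sketch: find pairs of passengers on one flight who share
# attributes that might merit closer scrutiny. All fields are made up.
from itertools import combinations

manifest = [
    {"name": "Passenger A", "address": "12 Elm St", "card": "1111", "past_flights": {"F100", "F200"}},
    {"name": "Passenger B", "address": "12 Elm St", "card": "2222", "past_flights": {"F300"}},
    {"name": "Passenger C", "address": "9 Oak Ave", "card": "1111", "past_flights": {"F100", "F200"}},
]

def shared_links(p, q):
    """List the attributes two passenger records have in common."""
    links = []
    if p["address"] == q["address"]:
        links.append("same address")
    if p["card"] == q["card"]:
        links.append("same credit card")
    if p["past_flights"] & q["past_flights"]:
        links.append("previous flights together")
    return links

for p, q in combinations(manifest, 2):
    links = shared_links(p, q)
    if links:  # any shared link is reported; a real system would weight and threshold
        print(p["name"], "and", q["name"], "share:", ", ".join(links))
```

Run over many flights, even a simple pass like this would also yield the baseline statistics the paragraph above calls for: a picture of what normal passenger traffic looks like.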

Collaborative security

What is sure to be important in devising security strategies for each mode of transportation is an understanding of the operations and characteristics of the transportation systems themselves. Strategies and tactics developed for one mode of transportation that are modified and applied to another may yield little, if any, benefit. The inspection and screening methods used for airline passengers and baggage in controlled settings, for instance, are ill suited to other kinds of transportation that require more open and convenient access.

The importance of understanding the characteristics of transportation systems and the varied security opportunities they present is illustrated by a new concept being considered for securing marine shipping containers. Currently, only about 2 percent of containers arriving at U.S. ports are subject to inspection by the U.S. Customs Service. Most ports are in urban locations, which are not desirable places to intercept a weapon, especially a weapon of mass destruction. Another option is to subject shipping containers to security checks and inspections at much earlier stages, starting when they are loaded. Indeed, a few large transshipment ports that act as hubs, such as Long Beach, Rotterdam, Newark-Elizabeth, and Hong Kong, offer potential points of leverage for designing a security system that encourages shippers to load containers in secured facilities and to take other steps to ensure container security throughout the logistics chain.

Because these large ports are so critical to the container shipping industry, such requirements could soon become the de facto industry standard. Shippers that choose not to comply may be denied access to the port or be subject to greater scrutiny and resultant delays, reducing their ability to compete. Under such a layered system, the prospects of an illicit container being intercepted before it reaches the United States (and, thus, the chances of the act being deterred in the first place) are likely to be greater than under the current system of infrequent container inspections at the end of their journey, on U.S. land. Of course, other countries, also eager to keep terrorists off their soil, will demand the same treatment for cargoes leaving the United States.

What this example demonstrates is that transportation security will have to be undertaken collaboratively. It must involve not only government security and enforcement agencies, but also the public and private entities that operate, own, and use transportation systems in this country and abroad. The more security measures promise to provide collateral benefits and utility to all the parties, the more likely the systems are to be maintained and improved. For instance, if a security system for shipping can help reduce theft and loss of cargoes, prevent the use of containers for shipping drugs and other contraband, and help carriers and shippers keep track of shipments, it has a better chance of being accepted and sustained. In the same vein, if luggage inspection and security control systems at airports can reduce the incidence of lost bags, both airlines and their passengers may find the added costs and inconveniences worthwhile. Both the role played by the Federal Aviation Administration’s air traffic controllers in grounding aircraft just after the September 11 attacks and the forensic uses made of tracking codes imprinted on U.S. mail in investigating the anthrax mailings demonstrate that such dual-use opportunities exist and can be integrated into security planning.

Those who have worked to improve quality in U.S. manufacturing are known to repeat the mantra that “you cannot inspect quality into a product.” The same observation has been made for safety, and it applies equally to security. The way to strengthen security is to build it into the systems by which transportation is operated and managed, just as we have done to ensure quality and safety.

The TSA’s strategic role

Building layered and well-integrated security systems into all transportation modes will not be easy. It will require an ability and willingness to step back and define security goals; to identify the layered and dual-use security concepts best suited to meeting them; and to work with many public, private, and foreign entities to implement the most promising ones. Security planners must be willing to question many existing security rules, institutional relationships, tactics, and technologies. And the planners themselves must be supported by sound systems-level research and analysis.

That is why work to devise and deploy such coherent systems must get under way now. What the tragic security failures of September 11 reveal is that the continual piecemeal imposition of new technologies, rules, and processes can compromise security and erode public confidence in the government’s ability to ensure it. Federal policymakers seeking to regain public confidence in aviation security did not have a coherent system in place that could be fixed by filling identifiable gaps. Rather, the structure that was in place was fragmented and irreparable, prompting Congress to take the many dramatic, rushed, and ad hoc measures that it did. Unfortunately, further attacks may make it even more difficult to devise sound security systems, leading to more erosion of public confidence and an even greater inclination to react reflexively through piecemeal means.

The TSA must take on a strategic role in developing coherent security systems for all transport modes.

Newly organized and compelled to act quickly on the congressional requirements for aviation security, the TSA is just beginning to examine the security needs of all transport modes and to define its role in meeting these needs. The TSA must be more than an enforcement agency. It must take on a strategic role in developing coherent security systems for all kinds of transportation. We urge the TSA to:

Take the lead in designing transportation security systems through collaboration. There are many public, private, and foreign entities that ultimately must field the systems that will make transportation more secure. Their decentralization and dispersion, however, hinder cooperation in devising and deploying system-level concepts. The TSA is well positioned to orchestrate such cooperation, which is essential for building security into transportation operations, as exemplified by the large port concept for securing marine shipping containers.

As the TSA works with transportation system owners, operators, and users in exploring alternative security concepts, it will become more sensitive to implementation issues, from economic to societal challenges. The prospects for deploying many new technologies and processes in support of security systems will likely raise some difficult societal issues. For instance, a more comprehensive and integrated CAPPS initiative for prescreening airline passengers may require the use of biometric cards and access to personal data to better identify passengers and their risk characteristics, presenting not only technical challenges but also raising concerns over legality, privacy, and civil liberties.

There are also issues of liability and risk. Industry participants in a linked system of security will want assurance that they are not assuming greater risk of liability if the security system fails, and that any proprietary information that is used will remain protected. Some of these legal and institutional issues will constrain or even preclude implementation, whereas others will not. Either way, they must be appreciated early on, before significant resources are invested in concepts that may prove to be unacceptable.

Conduct and marshal R&D in support of systems analysis. Thinking of security in a systems context will reveal many research and technology needs. One area of research that is likely to emerge as critical is an understanding of human behavior and performance. Human-factors expertise and knowledge will be necessary for crafting layered security systems that as a whole obscure the ways in which one might be caught–confounding the terrorist–and maximize the ability of security personnel to recognize unusual and suspect activity and behavior. Moreover, they are essential for designing security devices, facilities, and procedures that are efficient and reliable and that complement the skills of human operators and security personnel.

In support of such systems and human-factors research, the TSA must have both its own research capacity and the ability to tap expertise from within and outside the transportation community. In viewing R&D activities from a systems perspective, the TSA can determine where additional R&D investments can yield large benefits, and it can orchestrate ways to encourage such investments. To be sure, much necessary research and technology development must take place outside the transportation realm, in the nation’s universities and research institutions and with support from much larger R&D sponsors such as the Department of Defense, National Institutes of Health, and National Science Foundation. However, by making the needs and parameters of transportation security systems more widely known, the TSA can help identify and shape research and technologies that are promising and relevant for transportation applications.

Provide a technology guidance, clearinghouse, and evaluation capacity. At the moment, both public and private sectors are interested in developing and employing technologies for transportation security. Many public and private researchers, for instance, are trying to develop sensors that can detect and alert transportation security personnel to the presence of chemicals and explosives. But how does one go about designing sensors that can detect chemicals in a busy transportation setting with myriad background materials? And how does one deploy and network such sensors so that they provide both a useful level of sensitivity and an acceptable rate of false alarms–alarms that can wreak havoc on transportation operations and that might ultimately be ignored?
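The false-alarm question is largely a matter of base rates. The arithmetic below, with entirely hypothetical numbers, shows why even a sensor with a low false-alarm rate can swamp operators when the volume screened is enormous and genuine threats are rare.

```python
# Back-of-the-envelope illustration with hypothetical numbers; it is not a
# description of any actual detection system or deployment.
daily_screenings = 2_000_000   # assumed items passing the sensors each day
true_threats = 1               # assumed genuine threats among them (likely far fewer)
sensitivity = 0.99             # assumed chance a real threat triggers an alarm
false_alarm_rate = 0.01        # assumed chance a benign item triggers an alarm

expected_true_alarms = true_threats * sensitivity
expected_false_alarms = (daily_screenings - true_threats) * false_alarm_rate

print(f"Expected true alarms per day:  {expected_true_alarms:.2f}")
print(f"Expected false alarms per day: {expected_false_alarms:,.0f}")
# Roughly 20,000 alarms a day, essentially all of them false: the havoc and
# eventual complacency the text warns about.
```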

Clearly there is potential for much effort to be expended on developing technologies that are not suited to transportation settings or that are incompatible with overarching security systems. Thus, as it proceeds in identifying appropriate security systems for each sort of transportation, the TSA should be prepared to offer guidance to commercial developers on appropriate technological capabilities. By articulating these performance needs and parameters, the TSA will provide technology developers with a clearer target for their R&D efforts. It will also provide transportation system owners and operators with a better sense of which technologies and processes will work, where they can offer dual-use benefits, and where opportunities may exist to collaborate with researchers and technology developers.

Unconventional thinking on threats

September 11 demonstrated that terrorists are able to appropriate transportation systems and assets in ways that can be difficult to conceive of and so are overlooked in day-to-day efforts to ensure transportation security. The advent of the TSA should be helpful in heightening the transportation community’s attention to security, but perhaps not in overcoming the tendency to view transportation assets and operations within functional domains and securing them accordingly. The size, scope, and ubiquity of the transportation sector, coupled with its myriad owners, operators, and users, generate many opportunities for terrorists to exploit it in novel ways that may not be anticipated by those traditionally responsible for transportation security. By and large, transportation systems are regulated at the mode-specific level, and the entities that own and use them are organized for the efficient provision of specific services. Terrorists, however, are actively seeking to exploit new forms of threat that are outside such conventional perceptions of order. Terrorists may not view individual transportation assets, infrastructure, and services in such self-contained and functionally oriented ways, but rather as components and tools of other systems, as they used jet airliners and letters as weapons last fall.

We need a broader-based understanding of terrorist threats that involve transportation and how to respond to these threats. A national entity outside normal organizational settings whose sole mission is to explore and systematically assess terrorist threats, probable responses, and ensuing consequences could go a long way toward meeting this critical need. In a nationally televised address on June 6, President Bush proposed the creation of a cabinet-level Department of Homeland Security that would, among other things, gather intelligence and law enforcement information from all agencies and charge analysts with “imagining the worst and planning to counter it.” The need for such systematic analysis has likewise arisen in discussions of the National Academies’ Committee for Science and Technology to Counter Terrorism. We believe that such a dedicated analytic capability is critically important. It should offer a window into the mind and methods of the terrorist. It is also a prerequisite for keeping our transportation systems from being exploited again so tragically.

The Henry and Bryna David Endowment

The article by Elizabeth Loftus that begins on the following page is the first annual Henry and Bryna David Article/Lecture. It was presented in person at the National Academy of Sciences on May 7, 2002, to an invited audience. Financial support for the activity comes from a generous bequest from the Davids’ estate to support joint activities by Issues in Science and Technology and the Division on Behavioral and Social Sciences and Education (DBASSE) of the National Research Council.

Henry David was a scholar with a lifelong commitment to the advancement of the social sciences and their contribution to public policy, a commitment he demonstrated in numerous leadership roles: professor at the Lyndon B. Johnson School of Public Affairs at the University of Texas, executive director of the Assembly of Behavioral and Social Sciences at the National Academy of Sciences/National Research Council, Pitt Professor of American History and Institutions at Cambridge University, president of the New School for Social Research, dean of the graduate faculty of political and social sciences at Columbia University, and executive director of the National Manpower Council.

Bryna David was also active in public policy, working as an assistant to Eleanor Roosevelt during the 1948 UN General Assembly in Paris, as a scholar in residence at the Rockefeller Center in Bellagio, Italy, and as director of the National Manpower Council.

We at Issues are very pleased with the opportunity to work closely with DBASSE. Although much of what we publish is built on the foundation of the physical and life sciences, the application of this information to public policy is essentially a social science. The work that DBASSE does in individual disciplines such as economics and psychology and in crossdisciplinary areas such as child development, education, and human performance is of critical importance to a wide range of science and technology policy concerns. The David Endowment will enable Issues to tap DBASSE expertise and improve the quality of the magazine.

DBASSE staff and board members worked with Issues editors in selecting Elizabeth Loftus to prepare the first article/lecture, and they will continue to play that role in the future. In addition, endowment funds will be used to support an annual David Fellow from the DBASSE staff to work with Issues each year. The David Fellow, who will be selected on the basis of a proposed work plan, will contribute to Issues in a variety of ways–writing feature articles or book reviews, preparing a Real Numbers section, helping other DBASSE staff or committee members prepare articles, or identifying promising topics and authors for articles. The David Endowment will capitalize on the synergy between the two activities and will significantly enhance the visibility and influence of social and behavioral science expertise in public policy formation.

The National Academies and the University of Texas at Dallas are extremely grateful for this generous bequest from the Davids and for the help of the estate trustees, Alex Clark and Philip Hemily, who helped us design activities that will enhance the work that we do as well as carry on the commitment of the Davids to use the insights of the social and behavioral sciences to inform public policy.

Forum – Summer 2002

Electric system security

“Bolstering the Security of the Electric Power System” by Alexander E. Farrell, Lester B. Lave, and Granger Morgan (Issues, Spring 2002) addresses a pressing concern. Electric power plants are among the many potential targets of terrorist attack. Here in Congress we are passing legislation that increases safety precautions and protection against possible terrorism. Plants that generate the nation’s power supply are an important consideration. The Patriot Act and the Enhanced Border Security Act, as well as increased appropriations for Homeland Security, are significant steps we have taken to help ensure the well-being of the American people and their property against terrorist attack.

Although it is impossible to anticipate every potential danger to our society, we can greatly reduce the chances of future attacks by remaining diligent in our war on terror. We must not become complacent because of our current success in preventing some attacks, but must press on with our mission to eradicate terrorist cells in this country and elsewhere.

We are also making strides to protect technology sectors from potential terror attacks. I am a cosponsor of the Cyber Security Research and Development Act, which went through my Science Subcommittee and aims to establish new programs and provide more funding for computer and network security R&D. It also provides research fellowships through the National Science Foundation, the National Institute of Standards and Technology, and the Commerce Department.

The effect on the American consumer of guarding against terrorism remains to be seen, but I expect it to encompass all facets of commercial society. The cost of greater security for electric generation will eventually increase not just the cost of power for home consumers but the cost of everything produced.

REP. NICK SMITH

Republican of Michigan


“Bolstering the Security of the Electric Power System” thoughtfully examines the challenges in improving the ability of the nation’s electric power system to withstand terrorist attacks. The authors make a persuasive case for emphasizing survivability over invulnerability as a strategy to protect the electric power system as a whole.

The challenge of ensuring adequate security should be viewed in a broader context than the authors have acknowledged, however. We have limited funds as a society to spend on security, and these resources should be allocated rationally to defend infrastructure of all kinds. The aim should be to develop an integrated and balanced national strategy to protect all sectors of our society, not just the electrical system. That strategy should define an appropriate level of protection and establish the boundaries of the responsibilities of the private sector and local, state, and federal governments. The article, with its focus on the electrical system, does not recognize some of the broader and more fundamental questions that must be addressed.

The authors seem to suggest that a straightforward solution to security concerns is to eliminate “high-hazard facilities,” apparently including dams and nuclear plants. The evaluation of such an approach should appropriately include consideration of the economic, environmental, and other costs. Those placing a higher value on the reduction of greenhouse gases, for example, might not see the authors’ solution as quite so straightforward. And, when viewed in the broader societal context, the authors’ suggestion would presumably imply the elimination of other high-hazard facilities posing similar risks, such as many chemical facilities, petroleum refineries, and the like. This approach has widespread implications for our economy and lifestyle that the authors do not examine.

The authors also assert that adequate institutions for the protection of nuclear power plants have yet to be developed. I believe that the Nuclear Regulatory Commission’s (NRC’s) regulatory oversight processes are adequate to ensure that nuclear plant licensees establish and maintain acceptable levels of site security. Moreover, the authors somewhat mischaracterize the state of nuclear plant security. The Operational Safeguards Response Evaluations conducted by the NRC are force-on-force exercises that are explicitly designed to probe for weaknesses in plant security when the attacking force has complete knowledge of the site defensive strategy. When weaknesses are exposed, NRC licensees are required to take appropriate steps to correct them. The authors’ claim of poor performance thus reflects a common misunderstanding of the purpose and difficulty of these exercises.

RICHARD A. MESERVE

Chairman

U.S. Nuclear Regulatory Commission

Washington, D.C.


Pentagon management

I agree with many of Ivan Eland’s suggestions about running Pentagon acquisition like a business (“Can the Pentagon Be Run Like a Business?,” Issues, Spring 2002). So have numerous Pentagon leaders. For at least the past 30 years, every single secretary of Defense has sought to reform defense acquisition. Why, then, does the Pentagon still not run like a business?

Having observed and participated in defense business for the past 30 years, I offer a one-word answer: mission. The Department of Defense’s (DOD’s) mission is to deter wars and if necessary fight and win them. Efficiency does not appear in this mission statement. As a result, commanders and managers focus on buying the best equipment and making it run well within whatever budget they can justify. That does not mean that these commanders and managers do not care about efficiency. Most are capable, public-spirited individuals who want to give the taxpayers as much defense as possible for their dollars. However, because DOD’s mission does not require efficiency, it inevitably becomes a secondary priority in the crowded schedule of defense leaders. We are currently witnessing this phenomenon in action as senior managers focus intently on the war on terrorism.

Mission is the most important but not the only roadblock to running the Pentagon like a business. There are few incentives to save, because reducing costs often leads to smaller future budgets. Pressure to spend all of one’s budget, in order to establish a base for larger future budgets, also hinders efficiency. Finally, Congress and politics slow or halt efficiencies, though I believe Eland attributes too much of the problem to this factor.

If we wanted a more businesslike defense establishment, we would have to include efficiency as part of its mission and measure commanders and managers on their success in business as well as in war. But I would not recommend this approach. DOD’s single-minded focus on winning wars has served our country well many times during its history, including now.

Even if we accept a system that leaves some roadblocks in place, we should not stop trying to create a more businesslike Defense Department. In an organization the size of DOD, even small efficiencies can yield large dividends for the taxpayer. Eland’s various suggestions deserve careful consideration (or, more often, reconsideration). For example, we should follow Eland’s suggestions and strive for more use of commercial specifications and continue to work to engender competition. In a few cases, however, Eland does not acknowledge the disadvantages of his proposals. It is not clear, for example, that accepting monopolies by ending teaming relationships will hold down costs over the long run.

Whatever we do, we should remember that the Pentagon’s mission does not include efficiency. Our motto should be: Keep trying, but be realistic.

ROBERT HALE

Logistics Management Institute

Washington, D.C.

Hale has served as the Air Force comptroller and also directed the National Security Division of the Congressional Budget Office.


Ivan Eland has reported one of my recommendations for improving incentives for defense contractors. Although I do indeed recommend that defense R&D contracts be allowed to be more profitable, the emphasis should not be “upon completion” as Eland has written. Instead, the emphasis should be on allowing profits for doing defense R&D, period, whether upon completion or not.

As Eland notes, too often defense contractors take short cuts in R&D to get into production, because production is profitable and R&D is not. The result is that problems that could have been solved earlier in development can plague the system–and U.S. troops overseas–for decades. This adds to life-cycle costs for logistics, maintenance, repair, and workarounds. It also can leave our troops in the field with poorer capabilities than they and American taxpayers had been led to expect. In recent years, 80 percent of new U.S. Army systems did not achieve 50 percent of their required reliability in realistic operational testing. The Army has been working to correct this situation, but it is a direct result of the reversed incentives in defense contracting.

The Navy and the Air Force have had their own difficulties. In recent years, 66 percent of Air Force systems had to halt operational testing because they were not ready. In 1992, only 58 percent of Navy systems undergoing operational testing were successful.

Defense industry responds to incentives, and if the incentives reward the development of good reliable equipment for our military, industry will respond. If, on the other hand, the incentives are to get into production as soon as possible, U.S. troops can end up with unreliable or even dangerous equipment. The V-22 Osprey with its record of poor reliability and fatal crashes is a case in point.

When commercial industry produces something that doesn’t work, or has to be recalled, the consumer simply stops buying it. The company can fail and go out of business.

But in defense contracting, the government is the customer and often isn’t willing to stop buying a poor product or let the company fail. Too many jobs and other constituent interests are at stake, and the “can-do” attitude that we admire in our military usually takes over to work around the difficulties. The cynical expression “close enough for government work” has its origins in this situation.

So unless the politics in defense contracting can be changed to be the same as in commercial contracting, and I doubt it can, the Pentagon simply cannot be run like a commercial business. What we can do is work to improve the efficiency of Pentagon business processes and to reduce costs, and the military services try to do this every day.

In the long run, it will be more effective to change the incentives for the defense industry so that its well-being is tied more to the quality of its defense products than to the quantities produced.

PHILIP E. COYLE

Los Angeles, California

Coyle was assistant secretary of Defense and director of Operational Test and Evaluation from 1994 to 2001.


What broadband needs

As Adam D. Thierer points out (“Solving the Broadband Paradox,” Issues, Spring 2002), “the public has not yet caught broadband fever.” This should not be surprising. The rates of adoption of dial-up Internet access, as well as the utilization patterns of data networks, proved many years ago that the “insatiable demand for bandwidth” was a myth. New products and services take time to diffuse widely. Today, when offered the choice, most people vote with their pocketbooks for extremely narrowband wireless phones over comparably priced DSL or cable modem links.

Although mobility currently trumps broadband in the market, that may not persist forever. Adoption rates for broadband, although disappointing by the expectations of Internet time, are high, higher than those of cell phones at a comparable stage in the development of the wireless industry. The question is whether we should strive to increase these rates and, if so, how to do it.

Thierer dismisses “spending initiatives or subsidization efforts” as “unlikely to stimulate much broadband deployment.” That is surely incorrect. As the example of South Korea (with over 50 percent broadband penetration) shows, lower prices can do wonders for demand, and some “spending initiatives or subsidization efforts,” if well targeted, might lower prices in the United States. However, Thierer is probably right that it would be unwise to make giant investments of public money in this area, where technology and markets are changing very rapidly.

Thierer’s main prescription is to deregulate the Baby Bells. In the interests of brevity, I will not discuss the reasons why I feel this would have perverse effects. Instead, let me suggest three other methods for stimulating broadband: one intriguing but totally impractical, one very practical but incremental, and one speculative.

The impractical method for stimulating broadband adoption is to make music free on the Internet. As Thierer notes, Napster and its cognates have been among the main reasons people buy broadband connectivity. Instead of using the law to choke file swapping, perhaps we should encourage the telecom industry to buy off the music studios. Recorded music sales in the United States come to about $15 billion per year, whereas telecom spending is over 20 times higher. Thus, in the abstract, it might be a wise investment for the phone companies to buy out the studios. This is of course wildly impractical for business and legal reasons, but it would quickly stimulate demand for broadband. (It would also demonstrate that the content tail should not be wagging the telecom dog, as it too often does in political, legal, and business discussions.)

A more practical method for stimulating broadband is to encourage migration of voice calls to cell phones (which currently carry well under 20 percent of total voice traffic). This would force the Baby Bells to utilize the competitive advantage of wired links by pushing broadband connectivity. This migration could be speeded up by forcing the Baby Bells to spin off their wireless subsidiaries, and by making more spectrum available for cell phones.

The third technique for stimulating broadband is to encourage innovative new wireless technologies, such as those using the unlicensed bands (as in 802.11b, aka WiFi) and Ultra Wide Band. The technical and economic feasibility of these technologies for providing connectivity on a large scale is unproven as yet. However, if they do work, they might offer a new mode of operation, with most of the infrastructure owned and operated by users.

ANDREW ODLYZKO

Digital Technology Center

University of Minnesota

Minneapolis, Minnesota


I commend Adam D. Thierer for his illuminating article. The debate about how the United States carries out its most ambitious national infrastructure build-out since the interstate highway system is deeply complex. Fortunately, Thierer makes a clear, compelling case for just how much is on the line for the nation.

I couldn’t agree more with his central thesis that “FCC regulations are stuck in a regulatory time warp that lags behind current market realities by several decades . . . and betray the cardinal tenet of U.S. jurisprudence that everyone deserves equal treatment under the law.”

His explanation of the “radically different regulatory paradigms” that govern competing broadband platforms correctly notes that cable and wireless high-speed offerings are virtually regulation-free, while the comparable service of phone companies, DSL, is mired in heavy-handed rules written for voice services. Because DSL is singled out for an avalanche of regulations (including requirements that we share our infrastructure with competitors at below-cost prices), phone companies are deterred from investing aggressively in a truly national 21st-century Internet.

We have the opportunity today to end this separate and unequal treatment if Washington chooses wisely between two broadband proposals currently under consideration in the U.S. Senate. The first is a throwback to the past. With the nation in the grip of recession, Sen. Ernest Hollings (D-S.C.) suggests a multibillion-dollar big-government program. Adamantly opposed to equal regulatory treatment for phone companies, his proposal protects the current regulatory disparity, opting instead to subsidize state-sponsored telecom networks, something Thierer rightly warns is unlikely to keep the United States on technology’s leading edge.

Fortunately for taxpayers, the forward-thinking alternative, offered by Sens. John Breaux (D-La.) and Don Nickles (R-Okla.), would not cost the U.S. government a dime. The Broadband Regulatory Parity Act simply guarantees DSL the same minimal regulatory treatment as cable and wireless high-speed offerings. By ensuring equitable treatment of all broadband investments, this bill would encourage businesses, rather than taxpayers, to aggressively finance the fulfillment of America’s “need for speed.”

It’s a straightforward solution. And, as Thierer points out, its opponents are largely companies that “prefer not to compete.” Although I understand these companies’ desire to maintain their unfair advantage, certainly the nation has a strong interest in seeing the maximum number of companies and platforms vying for customers and investing rapidly in our broadband future.

Given the urgent need for Washington to acknowledge the importance of basic regulatory fairness, I truly appreciate Thierer’s cogent explanation of how a technology-neutral broadband policy will benefit not merely local phone companies but consumers and the U.S. economy. By raising awareness, this article will, I hope, intensify the pressure to deliver what all companies in competitive markets deserve: equal treatment from their government.

WALTER B. McCORMICK, JR.

President and CEO

U.S. Telecom Association

Washington, D.C.


Thirty years ago, the Federal Communications Commission (FCC) decided to require telephone companies to make their networks available to computer and data services on a nondiscriminatory basis. Fifteen years later, regulated open access interacted with the open architecture of the Internet to create the most dynamically innovative and consumer-friendly environment for information production in the nation’s history.

Over the course of about a decade, a string of innovations–the Web, Web browsers, search engines, e-mail, chat, instant messaging, file sharing, and streaming audio–fueled consumer demand for dial-up Internet connections. Half of all households now have the Internet at home. Unfortunately, the broadband Internet has not provided this same open environment. Cable companies have been allowed to bring their closed proprietary model from the video market into the advanced telecommunications market. In response, telephone companies have resisted the obligation to keep the high-speed part of their networks open. Both cable and telephone companies have a strong interest in slowing the flow of innovation, because they have market power over core products to protect. Both price the service far above costs, which starves new services of resources.

Cable companies, which have a 75 percent market share in the advanced service market for residential customers, do not want any form of serious competition for their video monopoly. They lock out streaming video and refuse to allow unaffiliated Internet service providers to exploit the advanced telecommunications capabilities of the network for new services. Telephone companies, which have a 90 percent market share in the business market, do not want competitors stealing their high-volume customers, so they make it hard for competitors to interconnect with their networks.

Closed networks undermine incentives and drive away innovators and entrepreneurs. In 1996, there were 15 million Internet subscribers in this country and over 2000 ISPs, or about 15 narrowband ISPs for every 100,000 subscribers. Today, with about 10 million broadband subscribers, there are fewer than 200 ISPs serving this market, or about 2 per 100,000 subscribers. In the half decade since high-speed Internet became available to the public, there has not been one major application developed that exploits its unique functionality.

With high prices and few innovative services available, adoption lags. About 85 percent of American households could get high-speed Internet, but only 10 percent do. Since broadband came on the scene, narrowband has added about three times as many subscribers. This is the result of closed or near-closed systems where market power is used to keep prices up and control innovation.

For two centuries, this country has treated the means of communications and commerce as infrastructure, not just a market commodity. A cornerstone of our open economy and democratic society has been to require that roads, canals, railroads, the telegraph, and telephone be available on a nondiscriminatory basis, while we strive to make them accessible to all our consumers and citizens. As digital convergence increases the importance of information flow, we are making a huge public policy mistake by allowing these vital communications networks to be operated as private toll roads that go where the owners want and allow only the traffic that maximizes the gatekeeper’s profits.

MARK COOPER

Director of Research

Consumer Federation of America

Washington, D.C.


Engineering education

Wm. A. Wulf and George M. C. Fisher make a wide range of excellent points in “A Makeover for Engineering Education” (Issues, Spring 2002). There is an increasing need for engineers to be diversely educated. Curricula must be broadened to include knowledge of environmental and global issues, as well as business contexts for design. And, of course, lifelong learning is also crucial.

However, meeting the authors’ goal of increasing the number of engineering graduates–at least from highly competitive universities offering engineering programs–will be extremely difficult. Capacity at almost all of these institutions is limited to the current output, although some growth potential does exist at new engineering programs or smaller schools.

The number of available spaces for new engineering students could be increased if more U.S. universities offered engineering curricula. Only a small fraction of U.S. universities now have engineering degree programs. Today’s engineering programs are resource intensive, requiring more in the way of laboratory work and number of credit hours than a typical undergraduate degree. Currently, a “four-year” engineering degree takes an average of 4.7 years to complete. This intensiveness increases the cost of running an engineering program and makes initiating new engineering programs in our universities difficult. It also acts as a deterrent to potential students who are not willing to narrow their undergraduate experience to the degree that current engineering programs require. Yet many engineering graduates go on to careers in sales, business, or other non-engineering job categories; these graduates benefit from their engineering degree but do not need the intensity or detailed disciplinary training provided by today’s engineering curricula.

Almost all universities with engineering programs have sought or will seek accreditation by the Accreditation Board for Engineering and Technology (ABET) for at least some of their programs. I propose that we encourage universities currently without engineering programs to consider creating a “liberal arts” engineering curriculum not designed to be ABET-accredited. Such a curriculum would allow for a broader range of non-engineering topics to be studied. Being less engineering-intensive, this curriculum could be structured to be completed in a true four years. It would be aimed at students interested in such activities as technical sales or those who plan on a corporate leadership path in technical firms. An understanding of technology and engineering would be of substantial importance, but with less emphasis on the ability to design or create engineering products. Graduates from these programs who later decide they wish to pursue a more technically oriented career could go on to get a master’s degree at a fully accredited engineering college. Also, if demand for engineers (and engineering salaries) increased, there would be a pool of graduates who could become practicing engineers with a relatively brief period of additional study.

The benefits of this new class of engineering programs would likely include attracting a much wider group of students into the engineering world, such as students who would reject today’s engineering programs as being too narrow and intense. It also would provide a pool of “almost engineers” who could, within a year or so, become full-fledged engineers.

FRANK L. HUBAND

Executive Director

American Society for Engineering Education

Washington, D.C.


I concur with Wm. A. Wulf and George M. C. Fisher’s underlying premise that engineering education needs to be reformed to respond to the 21st-century workplace.

We know that businesses are demanding engineers with broader skills, including the ability to communicate effectively and to work as part of a team. Surveys of companies employing engineers reveal that although new engineering graduates are well trained in their discipline, they often are not fully prepared for the business environment. Employers tell us they would like to see greater emphasis on teamwork, project-based learning, and entrepreneurial thinking.

The schools are beginning to rethink and reorganize their curricula, though perhaps not as quickly as Wulf and Fisher would like to see. For example, chemical engineering students at Virginia Commonwealth University must have experience in industry before graduating, and Johns Hopkins University students must complete at least 18 credits in the humanities or social sciences.

I can assure your readers that the professional societies are also responding to the changing world of engineering. The National Society of Professional Engineers (NSPE) has developed several resources for students and young engineers, including discussion forums, information links, and education programs, that are available online at www.nspe.org. We have created programs through each of our five practice divisions: construction, education, industry, government, and private practice. For example, through the Professional Engineers in Construction mentoring program, young engineers are introduced to licensed construction professionals and given practical guidance on acceptable construction practices.

The role of engineers in our society and their impact on our daily lives are constantly evolving. Engineers improve our quality of life, and engineering education is the foundation of the engineering profession. By providing greater opportunity for innovation and experimentation in engineering education, we can be assured that tomorrow’s engineers will have the skills needed to meet the demands of the evolving world of engineering.

DANIEL D. CLINTON, JR.

President

National Society of Professional Engineers

Alexandria, Virginia


Environmental policy for developing countries

I was delighted to read “Environmental Policy for Developing Countries,” by Ruth Greenspan Bell and Clifford Russell (Issues, Spring 2002). I have long felt that decisionmakers for development assistance should look to market mechanisms as an essential ingredient of development strategies, certainly including the management of environment issues. But I’ve also learned to be very wary of formulas that purport to be panaceas for issues as complex as economic and social development. (I was significantly involved in earlier panacea-seeking: the basic human needs thrust of the early 1970s, appropriate technology later in that same decade, sustainable development in the 1990s, etc.) Reliance on market mechanisms for environmental protection risks a fate similar to that of those earlier fads, at great cost to the need for the ultimate panacea: a wise combination of approaches, including market mechanisms, that fit individual countries’ particular needs and political realities. Bell and Russell have it just right.

I was particularly pleased with the analytical frameworks Bell and Russell use to begin to answer their fundamental question: “What have we learned about the conditions necessary for effective market-based policies?” Not surprisingly, given their own experience, their focus is primarily on the transition countries of East and Central Europe and the former Soviet Union, but their analysis would have been even more powerful had they cited examples of “bone-deep understanding of markets,” “ensuring integrity,” or “genuine monitoring” in countries with even less developed markets, such as Ghana, Bolivia, or Nepal–let alone Honduras or Burkina Faso.

I also wish that the authors had given the distinguished Theo Panayotou from Harvard an opportunity to speak for himself. For instance, since most of his work and success have come from efforts in countries with particularly well-developed market economies, such as Thailand and Costa Rica (if I recall correctly), I strongly doubt that he would advocate singular reliance on market mechanisms for environmental management in Honduras or Burkina Faso.

But these are minor nits to pick. Bell and Russell have made an important and eminently sensible contribution to the policy debate about environmental policy and management in developing countries. I hope our policymakers will pay attention.

THOMAS H. FOX

Washington, D.C.

Fox is a former assistant administrator for policy and program coordination of the U.S. Agency for International Development.


Russell and Bell provide some valuable insights into the tiered approach to encouraging the use of market-based instruments (MBIs) in environmental management in developing countries. The development community, donor organizations, and policymakers could certainly benefit from them. The authors make the case for the need for tailored approaches to environmental management under different conditions in various countries, but they generalize many other concepts without a thorough assessment of the specific experience with MBIs in different countries. In doing so, they contradict the main point they intend to make.

The article starts out by giving the distinct impression that MBIs are equated with emissions trading. Further, it implies that donor organizations push emissions trading and fail, but it does not mention the efforts of the countries themselves or of donor organizations to promote other MBIs in developing countries. It is not quite clear whether the authors’ complaint is about MBIs in general or about the complexity of emissions trading in particular. In some places, the article almost seems to conclude that command-and-control systems are superior to MBIs and that there is no need to try sophisticated methods such as MBIs for environmental management in developing countries, where even ordinary markets do not exist.

The article also gives the impression that there have been no successful applications of MBIs in developing countries and even doubts that the experience with them in the developed world is sufficient to draw convincing conclusions. In fact, we have seen very successful localized applications of MBIs. The article agrees that there could be such cases, but unfortunately, it does not classify them as successes because of their small scale or localized nature. This does not tally with the statement that donor organizations do not currently recognize the variations in conditions that should be considered to promote successful MBIs. Experience shows that this variability occurs not only from country to country but even from place to place or region to region within one country. Understanding such diversity is the reason for the localized successes in the use of MBIs.

Not all of the activities of donor organizations are visible to an external reviewer. Much study has been devoted to reviewing past experience and current situations. For example, the Asian Development Bank (ADB) stresses capacity building in its projects and tackles institutional issues in its policy dialogue with governments. Pilot projects are promoted before wide-scale implementation in order to avoid wasting resources. ADB’s approach to environmental work in many developing member countries is a case in point. On average, from 1991 to 2001, ADB provided lending support for about $764 million worth of environmental improvement projects in its developing member countries. The environmental management strategies and policies promoted in these projects are a mix of command-and-control measures and MBIs. ADB is currently supporting the pilot testing of air emissions trading through technical assistance grants as part of environmental improvement projects. ADB does not push emissions trading on countries other than by providing information and expertise on the merits of such MBIs. It does provide technical assistance for examining the whole breadth of existing MBIs and the potential for expansion.

Again, let me say that the main point of the article has many merits and holds valuable suggestions that can shape the future promotion of MBIs. However, it clearly underestimates the wide experience with MBIs, particularly in Asia. Much more patience is needed to digest the worldwide experience on the subject.

PIYA ABEYGUNAWARDENA

Manila, Philippines


The car of the future

“Updating Automotive Research” by Daniel Sperling is insightful and timely (Issues, Spring 2002). In connecting the government’s earlier attempts to improve the efficiency of personal vehicles through the Partnership for a New Generation of Vehicles (PNGV) to the recently announced FreedomCAR, the author raises important questions regarding the effectiveness of the policies behind both initiatives.

Although PNGV is now history, a debate continues as to what, exactly, it was and what it accomplished. Sperling’s account is faithful on both counts. PNGV morphed into FreedomCAR in March of this year. At that point, PNGV had run for about seven and a half years of its projected 10-year lifetime. It was increasingly apparent that the technologies considered necessary to create a production prototype of an 80-miles-per-gallon sedan had been identified and developed to the point where the dominant remaining issue was their affordability. Spending taxpayer money to improve the affordability of automobile components was increasingly hard to justify.

So the “sunsetting” of PNGV was entirely logical. On the other hand, the reluctance of the Big Three auto manufacturers to commit to a car that incorporated its technologies was a disappointment to all who participated in the program, especially since some Japanese car manufacturers were already offering cars with such technologies.

In recent years, PNGV support for fuel cell-related technologies increased significantly. Although there is currently much publicity regarding fuel cell vehicles, it is important to note that they are at a Model T stage. As pointed out by Sperling, there are many challenging technology issues to resolve before we will see many of these on the road. Therefore, the renaming of the government’s automotive technology program and the sharpening of its focus on fuel cell research are very much in keeping with the government’s role in pursuing long-range, more fundamental research.

What is needed that the government isn’t doing under the FreedomCAR program? To Sperling’s list, which I endorse, I add an extensive field evaluation program of promising technologies. This could be done within the government’s annual vehicle procurement program. The various fuel cell vehicle types could be assigned to federal facilities, national labs, military installations, etc., to be evaluated under controlled conditions. The experience gained would, among other things, reassure the buying public that the technical risk of fuel cell vehicles had been thoroughly evaluated and minimized.

One quibble with Sperling’s paper: He states that “PNGV was managed by an elaborate [emphasis mine] federation of committees . . .” Anyone involved in PNGV would question whether it was managed from above; guided, maybe. The method of operation was to decentralize to the working level, which was a dozen or so Technical Teams maintained by USCAR, the industry coordinating organization. Management meetings were infrequent, supplemented by telephone conference calls. There were seven government agencies participating in PNGV. This allowed industry access to a wide range of technologies. FreedomCAR, in contrast, is supported by a single agency, the Department of Energy (DOE). If a single agency is to be selected, DOE is certainly the most appropriate. However, there is still substantial ongoing research within other government agencies that might advance FreedomCAR’s goals. But this would call for the Bush administration to exert leadership at its highest levels, which it is apparently loath to do.

ROBERT M. CHAPMAN

Consultant

The RAND Corporation

Arlington, Virginia


General Motors (GM) agrees with Daniel Sperling’s characterization of the Department of Energy’s FreedomCAR initiative as “a fruitful redirection of federal R&D policy and a positive, albeit first step toward the hydrogen economy.”

We’re excited about FreedomCAR because it should, over time, help harness and focus the resources of the national labs, U.S. industry, and universities to support the development and commercialization of fuel cell vehicles. Shifting to a hydrogen-based economy is a huge undertaking. Sperling correctly points out that government will continue to have an important role, as will the energy companies and automakers.

As the world’s largest automaker, GM takes its role in this endeavor very seriously. We know cars and trucks. We know how to build, design, develop, and sell them. And the automotive industry contributes significantly to the global economy.

Sperling’s article concludes with several good suggestions for hastening the day when fuel cell vehicles are regular sights on the nation’s roads and highways. Although his specific recommendations dealt with funding issues for key stakeholders in developing the technology, many of the practices the money would support and other ideas to which Sperling referred are already in place at GM.

GM has focused intently on the fundamental science of fuel cells and has invested hundreds of millions of dollars in fuel cell research, because we believe that there are certain technologies that we must own in order to control our destiny. We also are working in partnership with other automakers and have developed key alliances with innovative technology companies, including General Hydrogen, Giner Electrochemical Systems L.L.C., Quantum Technologies Worldwide, Inc., and Hydrogenics Corp. In addition, we are working with dozens of other suppliers on various fuel cell components.

GM has also engaged the energy companies in developing gasoline-reforming technology for fuel cell applications. A reformer extracts hydrogen from hydrocarbons, such as gasoline and natural gas, to feed the fuel cell stack. Our North American “Well-to-Wheels” study conducted with ExxonMobil, BP, Shell, and the Argonne National Laboratory showed that reforming clean gasoline either onboard a vehicle or at gasoline stations can result in significantly lower carbon dioxide emissions. We’re working with energy companies to develop this concept as a bridging strategy until a hydrogen refueling infrastructure can be developed. We are also pursuing stationary applications for our fuel cell technology to provide clean, reliable electricity for businesses while increasing our cycles of learning.

In July 2002, GM will open a new 80,000-square-foot process development center at our fuel cell research campus in upstate New York. The facility, which will be staffed with up to 100 employees, will allow us to determine the materials and processes necessary to mass-produce fuel cells.

Commercialization is well within sight, even though much R&D remains. GM is working hard on our own fuel cell program, and we also fully support a broad-based public policy strategy to accelerate the industry’s progress along this exciting path.

LAWRENCE D. BURNS

Vice President

General Motors Research & Development and Planning

Warren, Michigan


Daniel Sperling succinctly summarizes and puts into perspective the developments in the United States, Europe, and Japan that have led to FreedomCAR. This is indeed a positive first of many more steps that will be needed in the long march to a sustainable energy economy. One might have expected such moves from a Democratic administration, not from a White House run by a conservative Texan from a political party that in the past has been closely identified with Big Oil and other fossil energy sources, and with long-standing hostility to the intertwined issues of man-made greenhouse gases and global warming.

It has been speculated that Energy Secretary Spencer Abraham, a former senator from Michigan with close ties to Detroit’s auto industry, was persuaded to embark on this hydrogen initiative by the automakers, who over the past decade have invested billions of dollars in hydrogen and fuel cell R&D. He also may have had advice from Robert Walker, reportedly a friend of Abraham’s, a former Republican congressman from Pennsylvania, former chairman of the U.S. House of Representatives’ Science Committee, and for years the only visibly vocal Republican hydrogen champion in Congress (Walker authored key early legislation, the 1995 Hydrogen Future Act). (To be fair, other than the late George Brown Jr. in the House and Sen. Tom Harkin in the Senate, there weren’t that many outspoken Democratic hydrogen supporters either.)

As to the Partnership for a New Generation of Vehicles’ (PNGV’s) “boomerang effect” on the foreign competition, it is not clear to me that, in the case of Daimler-Benz at least, this was in fact the motivating factor for the company to start its fuel cell program, which, as Sperling points out correctly, spawned the major efforts by GM and Toyota and eventually by most other carmakers. PNGV and the Daimler-Benz/Ballard venture got underway almost in parallel: The Daimler-Benz/Ballard pact was first reported in May 1993, and the formation of PNGV was announced four months later. Rather, it looked more like a logical relaunch of the company’s foray into hydrogen that began in the 1970s. The first hydrogen-powered Daimler-Benz internal combustion-engined minivan was shown at the 1975 Frankfurt Auto Show. These efforts reached a peak of sorts with a four-year test of 10 dual-fuel (hydrogen and gasoline) internal combustion vehicles in West Berlin that ended in 1988 after 160,000 miles.

Nor am I sure that Sperling is on the mark about automakers’ reluctance to expand industry engagement to energy companies. In its press releases, GM routinely points to joint research with energy companies, and both DaimlerChrysler and Volkswagen recently announced pacts with chemical process companies to develop clean liquid designer fuels for fuel cell vehicles. Conversely, most large oil companies have set up divisions (Shell Hydrogen is one example) to work on fuel cells and hydrogen.

But these are minor quibbles. Sperling is correct in his call for government to play an important role in commercializing fuel cells: The removal of institutional barriers (including tax breaks and eased environmental rules for zero-emission vehicles and facilities) and government’s role as a purchaser of hydrogen/fuel cell vehicle fleets come to mind. Also, government–national, state, local, and regional–must assist in setting up a fueling infrastructure, something that is now getting started: The Department of Energy is creating regional stakeholder groups to come up with recommendations, and California’s South Coast Air Quality Management District has drawn up plans for an initial small string of hydrogen fueling stations in the Los Angeles Basin.

Overall, Sperling is to be commended for pulling together and presenting a number of critical issues affecting this momentous shift in not only America’s but the world’s energy systems; after all, it’s not American warming, or Japanese warming, or European warming but global warming that we’re fighting.

PETER HOFFMANN

Editor and publisher

The Hydrogen & Fuel Cell Letter

Rhinecliff, New York


Coral reef pharmacopeia

Bravo to Andrew Bruckner for providing a balanced and accurate assessment of the enormous biomedical resources that can be derived from the unique life forms found on coral reefs (“Life-Saving Products from Coral Reefs,” Issues, Spring 2002). His article calls for increased attention to the development of marine biotechnology within the United States and, rightly, comments further on the issues of management and conservation of these highly diverse, genetically unique resources. Although U.S. funding agencies have not invested heavily in marine biomedicine, U.S. scientists have arguably remained at the forefront of this science. The difficulty has been a lack of programs that link marine exploration and discovery with the significant experience and financial resources needed to develop drugs. As a result, it can be confidently estimated that less than 5 percent of the more than 10,000 chemical compounds isolated from marine organisms have been broadly evaluated for their biomedical properties.

Why is this incredible resource not being used? For complex, valid reasons, the pharmaceutical industry has turned its attention to more secure and controllable sources of chemical diversity, such as combinatorial and targeted synthesis. Traditional studies of natural products, although a recognized source of new drugs, require extra time for collection, extraction, purification, and compound identification. The intensity of competition in the pharmaceutical industry has created the need for very streamlined discovery processes to maintain the current rate of new drug introduction. Nonetheless, marine-derived drugs are indeed entering the development process.

The role of American and international universities in drug discovery has been steadily increasing for the past two decades. More and more, the pharmaceutical industry is licensing academic discoveries. Academic scientists with an understanding of the world’s oceans, and with sufficient expertise in chemistry and pharmacology, have explored coral reefs worldwide, yielding many drug discoveries that are now in clinical and preclinical development. U.S. funding agencies have played a major role in stimulating these activities. The National Sea Grant Program (U.S. Department of Commerce) and the National Cancer Institute [National Institutes of Health (NIH)] have, for more than 25 years, played major roles in supporting marine drug discovery and development. One of the most creative and successful programs is the National Cancer Institute’s National Cooperative Natural Product Drug Discovery Groups (NCNPDDG) program. This program provides research funds for the establishment of cooperative groups consisting of marine scientists and pharmacologists, and it also includes the participation of pharmaceutical companies. By its design, it creates bridges between scientific disciplines and provides for the translation of fundamental discoveries directly to the drug development process. This program, which has both terrestrial and marine natural products components, is one of the most successful and productive efforts I have observed.

There are new initiatives on the horizon as well. In a recent development, the Ocean Sciences Division of the National Science Foundation and the National Institute of Environmental Health Sciences at NIH have formed an interagency alliance to create a multifaceted national initiative to focus on the “Oceans and Human Health.” This program seeks to create centers of expertise dedicated to understanding the complexities of linking oceans and their resources to new challenges in preserving human health.

Clearly, these activities will increase the U.S. investment in marine biomedicine and biotechnology. But is this sufficient to realize the enormous potential of the world’s oceans? Probably not, but it is a great start toward that end. The ocean and its complex life forms are our last great resource. To overlook the medical advances to be found there would be unwise.

WILLIAM FENICAL

Professor of Oceanography

Scripps Institution of Oceanography

University of California, San Diego

Science and Security at Risk

The marriage between science and security in the United States has at times been turbulent, and never more so than in the fall of 2000, following the darkest hours of controversy over security breaches in the Department of Energy’s (DOE’s) national laboratories. At that time, I was asked by the secretary of energy to head a commission to examine the rekindled issues surrounding science and security at the labs. I knew the problem would be intense. But I thought it would be focused largely on the consequences of the security compromises and DOE’s harsh response, which was partly driven by the partisan politics of Washington. Instead, I found a much more complex and difficult problem. At risk is the vitality of science in some of the best laboratories in the United States. This situation could worsen if the government seizes on wider, poorly designed security measures for the nation in the aftermath of the September 11, 2001, terrorist attacks. If we fail to manage this problem properly, then the risk could spread beyond the government to U.S. universities and private-sector institutes.

I have spent my entire professional career in government, dealing regularly with various national security issues. I think about national and domestic security every day. I worry deeply about terrorism, and especially the consequences if terrorists gain access to chemical, biological, or nuclear weapon materials. And I worry about espionage. I know there are spies within our land.

But I also worry that misplaced and poorly conceived security procedures will provide very little security and could potentially cripple the nation’s scientific vitality, thereby posing a serious threat to our long-term national security. That concern is the subject of this article. I will begin by reviewing the work of the Commission on Science and Security and then expand from that to the larger concerns I have during these critical times for homeland security.

At its core, DOE is a science agency. Science underpins each of its four missions, which focus on fundamental science, energy resources, environmental quality, and national security. DOE contributes enormously to science in the United States; it accounts for nearly half of all federal support in the physical sciences, including more than 90 percent of the investments in high-energy physics and nuclear physics. But from the beginning of the U.S. nuclear weapons program 50 years ago, science and security were in tension. The very nature of the scientific enterprise requires open collaboration. The essence of national security is restricted and controlled access to crucial information. We had that tension from the opening days of the Manhattan Project, and we managed it effectively throughout the Cold War.

What causes this tension, I came to realize, is not the incompatibility of scientific openness and security restrictiveness. Instead, the tension arises inside the national security community. Central to our national strategy for more than 50 years have been efforts to harness the nation’s scientific and technical talent to place superior tools in the hands of U.S. soldiers so that we could win any wars and, ideally, deter conflict in the first place. Therefore, national security requirements became a primary impetus for federal spending on science. Defending the United States without the genius of U.S. scientists would be infinitely more difficult.

The tension, then, is internal to the national security community. On the one hand, we need to advance the frontiers of knowledge to stay ahead of our opponents. On the other hand, we need to defeat those who would steal our secrets, keeping them as far behind us as possible in the race to field the weapons of war. Thus, we want to race ahead in one dimension and to slow the progress in another. That is the tension we confronted in DOE when the commission began its work. That, too, is the tension we now feel in a post-September 11 United States. How do we preserve our economic and social vitality and still secure our homeland?

Between 1999 and 2000, DOE was hit by two major security crises at Los Alamos National Laboratory, the home of the Manhattan Project and some of the nation’s most classified national security work. The first case involved Wen Ho Lee, a U.S. physicist of Taiwanese descent accused of giving sensitive nuclear information to China. The Lee case was considered explosive because it involved one of DOE’s own employees. Furthermore, the allegation that China had obtained, through a naturalized U.S. citizen, access to some of our most sensitive information was alarming. Less than a year later, on May 7, 2000, a second incident occurred at Los Alamos involving two missing computer hard drives containing classified nuclear weapons information. The hard drives resurfaced more than a month later in a classified area that had been searched twice before by investigators. As the magnitude of the second incident crystallized, accusations flew in Congress, DOE, and the security community.

DOE responded by issuing a series of controversial security measures intended to close the security gaps highlighted by the two crises. Although well intentioned, many of these department-wide measures were simply misguided or misapplied. In fact, the measures only exacerbated departmental tensions and contributed to a decline in employee morale, most notably at the nuclear weapons laboratories–Los Alamos, Lawrence Livermore, and Sandia–where security crackdowns, and the perception that the reforms were arbitrarily imposed, were most severe. Because the security measures were blanketed across the department, unclassified laboratories also were affected, even though their security needs were significantly different from those of the weapons laboratories.

With the high-profile allegations and security violations at Los Alamos as a backdrop, Energy Secretary Bill Richardson authorized the Commission on Science and Security in October 2000 to assess the challenges facing DOE and its newly created National Nuclear Security Administration (NNSA) in conducting science at the laboratories while protecting and enhancing national security. The commission was asked to examine all DOE laboratories (not just the three weapons labs where classified work is most concentrated) in order to address the department’s broad range of classified and unclassified activities and information. The commission included 19 distinguished members from the scientific, defense, intelligence, law enforcement, and academic communities. We presented our findings in May 2002 in a final report to Energy Secretary Spencer Abraham, who had taken office with the change in administrations and had rechartered the commission.

DOE needs a philosophy and clear procedures that integrate science and security, rather than treating them as separate functions.

The commission concluded that DOE’s current policies and practices risk undermining its security and compromising its science and technology programs. The central cause of this worrisome conclusion is that the spirit of shared responsibility between the scientists and the security professionals has broken down. Security professionals feel that scientists either do not understand or fail to appreciate the threats and thus cannot be trusted to protect U.S. secrets without explicit and detailed rules and regulations. Scientists, in turn, believe that security professionals do not understand the nature of science and thus pursue procedures designed to demonstrate compliance with rules more than securing secrets. These perceptions have hardened into realities that significantly and adversely affect the trust between scientists and security professionals in the department.

The damaging consequences of this collapse of mutual trust cannot be overstated. It is not possible either to pursue creative science or to secure national secrets if scientists and security professionals do not trust each other. Scientists are the first line of defense for national security. If we do not trust a scientist, then we should not give him or her a security clearance. If we grant a scientist clearance, then we should trust that person’s judgment and help him or her do the assigned job. Of course, the natural complement to trust is verification. Once trust has been established, that trust must be periodically verified by the organization in order to reduce insider threats, negligence, or employee incompetence in security matters. Verification that is transparent, unobtrusive, and selective can bolster security without diminishing productivity or demoralizing personnel.

Scientists cannot be expected to be aware of all the risks they face from hostile governments and agents. They depend on security professionals to establish the environment of security so that they can pursue effective science within that framework. They also depend on security professionals to translate uncertain and occasionally ambiguous information gathered by counterintelligence experts into realistic and effective security procedures. At the same time, security professionals can understand what is at stake only by working with scientists. And so the entwined needs of scientists and security experts come full circle. These two communities depend on each other to do their shared job successfully.

Key strategic elements

There are many problems standing in the way of that ideal working environment. After conducting extensive research and discussion, the commission outlined five broad elements of a comprehensive strategy for creating effective science and security in DOE’s labs. All have to do with developing a security architecture that is consistent with an environment in which, during the past two decades, both the conduct of science and the international security landscape have changed considerably.

To begin with, science has become an increasingly international enterprise. Multinational collaborative efforts on large science projects are now common, if not the norm. Within the government, classified science, once an isolated and compartmentalized endeavor, has come to rely on unclassified science as a vital new source of ideas. There is greater fluidity in the exchange of information, accompanied by a need for U.S. scientists to work with scientists from around the world. As a result, global scientific networks have grown exponentially through the use of modern communications and information technologies. People also are more mobile, and scientists from developing countries have made their way to developed countries in search of better facilities and research environments. In the United States, increasing numbers of scientists, engineers, and mathematicians from other countries are filling slots in doctoral programs, laboratories, and businesses. In DOE’s unclassified laboratories, for example, foreign students as a share of total staff increased from 16 percent to 19 percent between 1996 and 2000.

As science has changed, so has our security environment. Since the end of the Cold War, our security priorities have shifted from a largely bipolar world to an increasingly complex world with asymmetric threats to U.S. interests. September 11 and the anthrax attacks have forced us to redefine and rethink the nature of risk to our national assets. Indeed, we have come to understand that zero risk is impossible; any system based on the presumption of zero risk is bound to fail. We can only minimize risks through careful calculation and analysis of threats.

Given this context, the first and arguably the most difficult element of a new security architecture requires the Energy secretary to confront the longstanding management problems of the department. Many well-intentioned reform efforts, piled on top of an organizational structure that traces back to the earliest days of the Manhattan Project, have created an organization with muddy lines of authority. The fundamental management dysfunction of DOE predated the security scandals at Los Alamos. The security “reforms” imposed administratively and legislatively in the aftermath of the security scandals made the problems dramatically worse. Therefore, the commission’s first recommendation is that the Energy secretary needs to clarify the lines of responsibility and authority in the department. This means creating not only smaller staffs but also clean lines of authority and new procedures that limit the endless bureaucratic wrangling in the department. There will always be tension among headquarters, field offices, and individual laboratories. These tensions need to be channeled into clear and predictable bureaucratic procedures that have a definable start and finish. Today, the losers in one bureaucratic skirmish merely advance to new firing positions and pick up the battle all over again.

Although these organizational issues seem arcane, it is absolutely necessary to have clear lines of authority in order to have sound security. For example, how can a counterintelligence officer be effective if he or she has two supervisors or none at all? How can emergency situations be managed properly when it is unclear who is in charge or, perhaps worse, if too many people think they are in charge? In the commission’s field visits to DOE laboratories, we found numerous instances in which there was profound confusion over the chain of command and responsibility. For example, DOE has two counterintelligence programs: one for the department itself and one for its internal NNSA. Because counterintelligence officers report to separate DOE and NNSA chiefs, there is inevitably fragmentation of information and communication. For a counterintelligence operation, which by its very nature requires informational cohesion to covertly detect spies, this is less than ideal. Among the changes the commission has proposed is assigning a single point of responsibility for counterintelligence in order to create a unified operation within DOE.

Second, DOE needs a philosophy and set of clear procedures that integrate science and security, rather than treat them as separate functions. The commission believes that the Energy secretary should lead the departmental science mission and guide the supporting functions, including security. The directors of individual laboratories hold similar responsibilities at their level, in essence making them the chief scientists and the chief security officers for all laboratory functions. These directors need to have the flexibility to design the science agenda and the security program according to the needs of their laboratories, but they also need to be held strictly accountable for the performance of both. Headquarters organizations need to define policies but stay away from prescriptive formulas for how individual labs and offices should perform those functions. Headquarters organizations also should set standards of accountability and monitor performance.

The issue of integrating science and security at DOE is part of a much larger need for science to be included as a central component of security-related decisions throughout the government. Scientists must be part of the security solution. If they are not included, then we risk eroding the effectiveness of our security approach from the inside out. Without strong participation from scientists, we also risk losing top scientific talent from projects related to national security and even from unclassified laboratories, where the bulk of the nation’s fundamental scientific research is performed. The commission believes that DOE can better include scientists in the security decisionmaking process by adding them to headquarters and laboratory-level advisory boards, establishing rotating policy positions for scientists from the laboratories and developing new ways to link scientists and security in assessing risk and threats.

Third, DOE must develop and deploy a risk-based security model across the department’s entire complex. The sensitivity of activities is not uniformly distributed through every office and facility. As such, security rules should not be one size fits all. Instead, there are a small number of very sensitive “islands” of national security-related activity in an “ocean” of otherwise unclassified scientific activity. We need to protect the islands well, while not trying to protect the whole ocean and thus inadequately protecting its islands.

A risk-based security model for DOE needs to accommodate the complex nature of science today. As mentioned, science is increasingly collaborative, with research teams around the world that are increasingly connected through high-speed, high-capacity data channels. These teams will have U.S. citizens and foreigners working side by side. Securing critical secrets in this environment will be extremely challenging. The worst thing we can do, however, is throw a smothering blanket of regulation over the entire enterprise and chase away creative scientists from our labs. I am convinced that scientists will protect secrets if security procedures are clear and if the scientists are included in the policy process. But scientists are like the rest of us: they lose patience with security procedures that are arbitrary and easily subverted or skirted. Security professionals, therefore, need stronger skills and resources to design and convey effective security procedures. In essence, a risk-based model enables security professionals to protect what needs protecting.

Fourth, our security professionals need help in modernizing their security approach, and they need new tools and resources to do it. Like many parts of the intelligence community, DOE tends to inadvertently undercut its own capacity to implement a modern security model by providing inadequate tools to its security and counterintelligence professionals. In some instances, security and counterintelligence professionals are constricted by outdated systems and modes of thinking that are entrenched in years of shortsighted policy. For example, DOE’s analytic efforts are frequently undercut by a Cold War posture that emphasizes a rigid case-by-case approach, meaning that comprehensive data analyses are performed only when an incident provokes them.

Intellectually undisciplined categories such as sensitive unclassified information can harm security rather than help it.

The case-by-case approach is essentially reactive and does not employ continuous analysis as the best preemptive tool against spies and insider threats. The approach also fails to take maximum advantage of data and resources available in the laboratories and from the scientists themselves, who often collect routine data on visiting researchers that may be useful for counterintelligence purposes. These problems in DOE are symptoms of broader analytic disconnects in our intelligence apparatus that have received greater attention since September 11. The commission suggests a number of possible tools and techniques to assist in the development of a risk-based model.

The job of security professionals is to develop modern risk-based security models appropriate to the complex environment of the department and its laboratories. But they also need new kinds of training and analytic skills that are currently rare, particularly in the counterintelligence community. They need new high-technology security tools and analytic skills to design these security models and adapt them to an ever-changing work environment. For example, there are now biometric, personnel authentication, and data fusion systems that would be helpful not just to DOE’s security and counterintelligence work but also to broader government efforts to harness intelligence data from disparate sources. DOE could benefit tremendously from these state-of-the-art tools. Yet historically we do not honor our security professionals with the resources and support they need and deserve until we have a disaster, and then we invest too late. We need to invest now, but invest wisely.

Finally, DOE needs to devote special attention to cyber security. Although the department has always been computationally intensive, the digital revolution in the department has been sweeping. Like the rest of government and society, DOE has given far more attention to sharing information than to protecting the computing environment from malicious action. This is critical because it is now dramatically easier to steal U.S. secrets by downloading files electronically than by covertly taking pictures of individual pages of drawings, as spies of an earlier era did. DOE has devoted too little attention to cyber security. Although the commission found that the Energy secretary has already initiated steps in the right direction, there is precious little time to waste in this important area.

Microcosm of national problems

DOE, I believe, is a microcosm of the challenges facing the United States in the aftermath of September 11. What so shocked most U.S. citizens was the realization that the suicide terrorists lived among us for months planning their terrible work. We recognized that we were victims, in part, of the very features we cherish most in the nation’s way of life: a dynamic and energetic social environment, freedom of movement and privacy in our personal lives, and a nation increasingly interconnected with a wider world.

Now, homeland security has risen to the top of the government’s agenda. I strongly agree that it should. The first business of government is to protect its citizens from harm, caused by forces without or within. But I do not want to lose that which I love in this country, and I do not want our collective lives to be impoverished by security procedures that bring inefficiency without providing security. How do we protect ourselves from the various dark forces without becoming a police state? I will accept whatever it takes to protect the United States, but I also want to design those security procedures so that they do not sacrifice the values and the opportunities that make the United States unique.

Today, there are a number of efforts to protect and restrict access to scientific information. These include efforts to restrict the activities of foreign nationals, limit information already in the public domain, expand the use of “sensitive unclassified information,” broaden enforcement of “deemed exports,” and impose new restrictions on fundamental research. I believe we are at risk of duplicating some of the early mistakes we saw at DOE in several ways.

First, as an example of limitations on foreign nationals, the Department of Defense is circulating a draft regulation that would prohibit noncitizens from working as systems administrators for unclassified computer systems that are deemed “sensitive.” The regulation would apply to government employees and contract employees. However, there is no clear definition of what constitutes a sensitive computer system. Without a clear definition, any local security official could designate a system as sensitive. Such a regulation would enormously complicate the task of finding qualified personnel, practically necessitating an equivalent of security clearances for individuals who would not come close to classified work. We cannot afford to alienate noncitizens from unclassified U.S. scientific and technical enterprises, particularly because we are unable to supply enough U.S. scientists and technical experts to support our growing national needs.

Second, within the government there are efforts to narrow the body of publicly available government information, including documents that have been available for years on the Internet. Similarly, since September 11 there have been calls for expanded use of “sensitive unclassified information” and other ambiguous categories of information. I agree that there is a need to make sure information on Web sites and other public venues does not include information that might compromise our security. I also believe that certain information that falls somewhere in the gray area between classified and unclassified also should be controlled. But we must be clear: If information truly requires protection, it should be classified or protected by proper administrative controls that are based in statutes and have clear definitions for use.

Within the context of DOE, the commission witnessed how intellectually undisciplined categories, such as sensitive unclassified information, can harm security rather than help it. Sensitive unclassified information has contributed to confusion for scientists and security professionals alike in DOE and has resulted in the proliferation of homegrown classification labels in the laboratories. Indeed, the commission sees it as a category for which there is no usable definition, no common understanding of how to control it, no meaningful way to control it that is consistent with its level of sensitivity, and no agreement on what significance it has for national security.

Third, the “deemed export” problem has also become worse. Under the federal Export Administration Regulations, a release of technical data or source code to a foreign national within the United States is “deemed” to be an export to that person’s home country. This category includes technical information or data provided by verbal means, mail, telephone, fax, workshops and conferences, email, or other computer transmission. The underlying concept of deemed exports applies to “sensitive” technologies, but there is no good definition of what constitutes a sensitive technology. Unfortunately, such broad criteria have been adopted that they could apply to almost any new and promising technical development. And because no clear definition exists, laboratory personnel are narrowing the scope of international cooperation for fear that they may be violating deemed export regulations.

Fourth, we must be wary that efforts to protect classified activities do not unintentionally compromise fundamental research as well. As mentioned, classified science relies increasingly on unclassified science as a source of innovation and ideas. The commission believes that there is a strong need for clarification in the protection of information that is produced as a result of basic research within DOE and throughout the government. In particular, we call on President Bush to reissue National Security Decision Directive 189 (NSDD-189). First issued in 1985 by President Reagan, NSDD-189 is a solid framework for protecting fundamental (unclassified) research from excessive regulation. The directive states that fundamental research is generally exempt from security regulations, and that any controls can be imposed only through a formal process established by those regulations. Although the directive remains in force today, too few government and security management professionals know about it or use it as a guide. In this time of heightened security, reissuing NSDD-189 would be a small but significant step in providing guidance to government organizations for striking a healthier balance between open science and national security needs.

We must be wary that efforts to protect classified activities do not unintentionally compromise fundamental research as well.

Although I understand and support the need for stronger security procedures, I see too many instances of inappropriate security procedures that are adopted in haste by government officials who fear criticism for inaction. In today’s increasingly dynamic society, security demands a disciplined, sophisticated analysis. I fear that without such an approach, heightened security restrictions will narrow the scope of creative interaction among U.S. scientists and technical personnel. Currently, there are many other areas where we are debating what the right security methodology should be, including restrictions on student visas, mandatory biometric tags on passports, restrictions on drivers’ licenses for noncitizens living legally in the country, and access to biological agents, to name just a few.

Careful attention to security after the September 11 attacks is justified and appropriate. But we must not adopt hastily conceived security procedures with insufficient thought and design as an expedient in these urgent times. I believe that is precisely the mistake we saw at DOE. The Commission on Science and Security was assembled to examine the security policies and procedures made in that climate of fear and criticism. I worry that we are on the verge of making comparable mistakes now that would apply more generally in the United States. Now is the time to move to protect the country, but this must be done with prudent, reasoned security measures that provide the right tools and technologies for security professionals and preserve the openness and strength of our scientific institutions. This time, we all have a stake in getting it right.

From the Hill – Summer 2002

Debate over meaning of “sound” science heats up

Scientific research has played a critical but not always clear role in recent debates over energy policy, with advocates on all sides of the issues marshaling “sound science” to defend their positions against the “junk science” of their opponents. The battles have been particularly heated recently over the Bush administration’s support of the proposed Yucca Mountain nuclear waste repository in Nevada and of oil and gas drilling in the Arctic National Wildlife Refuge (ANWR) in Alaska.

Since 1982, the government has been evaluating the feasibility of Yucca Mountain as the permanent storage site for the nation’s high-level radioactive waste. According to Energy Secretary Spencer Abraham, who has stressed the importance of “sound science” in deciding Yucca Mountain’s future, the suitability of the site is supported by more than $4 billion and 20 years of scientific research.

However, opponents of the site, including environmentalists, some scientists, and Nevada’s governor and congressional delegation, have questioned whether the time and money spent are valid indicators of the quality of the science. In a recent article in Science, for instance, Rodney Ewing of the University of Michigan and Allison Macfarlane of the Massachusetts Institute of Technology argued that “political pressure to resolve the issue . . . now drives the decisionmaking process at the expense of the science required to support this important public policy decision,” and concluded that “moving ahead without first addressing the outstanding scientific issues will only continue to marginalize the role of science.” By contrast, the Department of Energy (DOE), Nuclear Regulatory Commission (NRC), International Atomic Energy Agency, and other groups have stated that although some technical issues remain, the science is sound enough to support moving to the next stage of the development process.

Regardless of the quality of the science, the political pressure to move forward on Yucca Mountain is enormous. Not only are many of the nation’s nuclear power plants, which are scattered across the country, running out of onsite storage space, but some of them are also suing DOE to recover storage costs incurred since 1998, when DOE had originally contracted to begin removing the waste. According to current projections, the Yucca Mountain repository will open in 2010 at the earliest. As a result, both the administration and Congress, with the exception of the Nevada delegation, have strong political and economic motivations to select the site regardless of the remaining scientific questions.

The House recently voted to override the Nevada governor’s veto, 306-117, and the Senate is likely to follow suit. If it does, DOE will be able to proceed with submitting a licensing application to the NRC, at which point further evaluations and public comment as well as further debates over the soundness of the science will take place.

The issue of whether or not to allow drilling in ANWR was one of the most contentious issues in the debate over the Senate energy bill, which was passed on April 25, almost 10 months after passage of a House energy bill. The Senate bill would continue the ban on drilling anywhere in the 19-million-acre refuge, whereas the House bill authorizes drilling on 2,000 acres along the refuge’s coastal area. Resolving this difference in conference will be, at best, a painful process.

Critics argue that drilling will harm local populations of caribou, polar bears, and other arctic wildlife, some of which are important economic resources for native groups. Proponents counter that new, low-impact drilling techniques will cause minimal damage to the refuge and that the project will significantly boost the U.S. energy supply and raise the standard of living of native groups who own rights to some of the oil-rich land. All of the groups have attempted to use scientific research to bolster their positions, and each has given the research its own particular spin.

The U.S. Geological Survey (USGS) estimates that 4 billion to 12 billion barrels of economically recoverable oil are buried in the refuge, most of it in the northwest corner of the coastal area near the Prudhoe Bay oil fields. Although these resource estimates have been largely immune to dispute, estimates of the potential impact of drilling on wildlife, which are less certain and more emotionally charged, have become focal points in the controversy. In early April, for example, USGS released a report suggesting that ANWR’s wildlife would be adversely affected by drilling under a number of different development scenarios. The report appeared to contradict earlier testimony by Interior Secretary Gale Norton, who responded to its release by requesting a supplementary report. That report, completed in less than a week, analyzed the effect on caribou populations of drilling limited to the 2,000-acre footprint specified in the House energy bill, a scenario not included in the original study.

When the supplementary report, which concluded that limited development would have no adverse impact on caribou, was released, proponents of drilling saw it as further evidence that limited development would be environmentally harmless. Opponents, however, saw it as evidence that the administration was manipulating the scientific process to support its political and economic goals.

Doubling of NSF budget over five years proposed

House Science Committee Chairman Sherwood L. Boehlert (R-N.Y.) and Research Subcommittee Chairman Nick Smith (R-Mich.) have proposed a National Science Foundation (NSF) reauthorization bill that would set the agency on a track to double its budget over the next five years.

Calling NSF research critically important to the economy, national security, health, and education, Boehlert presented the bill (H.R. 4664) at a May 7 press conference alongside a bipartisan group of cosponsors, including the ranking member of the research subcommittee, Rep. Eddie Bernice Johnson (D-Tex.). The legislation would provide annual 15 percent increases for NSF over the next three years, boosting its budget from $4.8 billion in fiscal year (FY) 2002 to $7.3 billion in FY 2005. If the budget continued on this trajectory, it would reach $9.6 billion in FY 2007, twice the total for FY 2002.
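The doubling claim is essentially an exercise in compound growth. The brief sketch below (an illustration only, assuming flat 15 percent annual increases from the $4.8 billion FY 2002 base rather than the bill’s exact authorization figures) traces the trajectory:

# Illustrative assumption: compound the FY 2002 NSF base of $4.8 billion
# by 15 percent per year; the bill itself authorizes specific dollar
# amounts, so this is only an approximation.
budget = 4.8  # billions of dollars, FY 2002
for fiscal_year in range(2003, 2008):
    budget *= 1.15  # 15 percent annual increase
    print(f"FY {fiscal_year}: ${budget:.1f} billion")
# Yields roughly $7.3 billion for FY 2005 and about $9.7 billion for
# FY 2007, close to twice the FY 2002 total and consistent with the
# doubling goal.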

“Congress has quite properly committed to doubling the budget of the National Institutes of Health,” Boehlert said. “But NIH does not and cannot fund the full range of research activities the nation needs to remain prosperous and healthy. NSF has the broadest research mission of any federal science agency and the clearest educational mission. It needs the funding that goes with that expansive–and expensive–mandate.”

The funding provided by the bill would be spread fairly evenly across the agency, with mathematics and nanotechnology research singled out for particularly large increases. The legislation would also encourage greater transparency in procedures for selecting major research projects and better cooperation with the National Aeronautics and Space Administration in funding astronomy research.

The scientific community has responded enthusiastically to the proposal. Earlier in 2002, the Coalition for National Science Funding, which represents more than 70 scientific and engineering societies and universities, recommended a 15 percent budget increase for NSF in FY 2003.

The bill’s aims, however, cannot be achieved without the support of the Appropriations Committee, and key House appropriators have not embraced the goal of doubling NSF funding.

In the Senate, on the other hand, the idea has widespread support. Sens. Barbara Mikulski (D-Md.) and Kit Bond (R-Mo.), the chair and ranking member of the subcommittee that allocates funds for NSF, have lobbied hard in recent years for more NSF funding. In 2000, more than 40 senators signed a letter circulated by Mikulski and Bond supporting the doubling of the agency’s budget.

The Bond-Mikulski push follows Senate passage in 2000 of an authorization bill to double federal funding for all civilian R&D and a January 2001 recommendation by the Hart-Rudman commission on national security to double the entire federal R&D budget by 2010. In addition, Senate Majority Leader Thomas A. Daschle (D-S.D.) recently called for doubling civilian R&D funding.

Asked whether the Office of Management and Budget supports his bill, Boehlert said that he is involved in discussions with the White House and expects “no major difficulty moving forward on the course that we’re charting.”

Government to boost scrutiny of foreigners studying science

The U.S. government has taken some steps to close loopholes in the immigration process that apparently allowed some of the September 11 terrorists to enter the country. They include additional scrutiny of foreigners studying in science fields.

On May 14, President Bush signed into law a border security and immigration bill that, among other things, will require tamper-proof visas and passports for all foreign visitors as well as the use of biometric methods for detecting potential terrorists. On May 10, Attorney General John Ashcroft announced a new regulation to implement the Student Exchange and Visitor Information System (SEVIS). SEVIS will replace the paper system for tracking foreign students with an electronic system.

The new SEVIS database will be launched on a voluntary basis beginning in July; full compliance must be achieved by January 30, 2003. Information to be collected on foreign students includes port of entry, arrival on campus, residential address, and field of study. Although universities and colleges are more or less satisfied with the new electronic system, some concern has been expressed over whether there is enough time to meet the deadline.

In addition to using SEVIS for tracking the comings and goings of foreign students, the White House Office of Science and Technology Policy (OSTP) recently revealed plans to institute an additional level of scrutiny of students who plan to study science and engineering in the United States. A new Interagency Panel on Advanced Science Security (IPASS) will provide a specialized review of F (student), J (postdoctoral), and M (vocational) visas for students pursuing scientific study.

In a briefing to scientific and educational organizations, an OSTP representative said that the goal is “to ensure that international students or visiting scholars do not acquire ‘uniquely available’ and ‘sensitive’ education and training at U.S. institutions and facilities that can be used against us in a terrorist attack.”

IPASS, which will be composed of representatives from defense, civilian, immigration, and intelligence agencies, will review visa applications. Criteria for determining whether a student should be given an IPASS inspection will first be established by the Immigration and Naturalization Service using the Technology Alert List and country of origin. IPASS will then analyze students according to a series of variables to determine patterns. The variables include the student’s educational background, training, and work experience; country of origin; whether the field of study is uniquely available in the United States and sensitive; and whether research conducted elsewhere at the chosen school–beyond the student’s major–could have national security implications.

IPASS is still in the conceptual phase. Issues that need to be resolved include defining “uniquely available” and “sensitive”; deciding how best to implement the IPASS review efficiently; and developing mechanisms for determining intent to do harm.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Memory Faults and Fixes

The sex abuse scandal enveloping the Catholic Church has prompted vigorous calls for action: The Church should hand over to prosecutors a list of all its priests who have ever been accused in the past of sexual abuse; priests should be forced to resign if there has ever been an accusation; courts should devise ways to interpret laws that would allow criminal charges against priests even when the statute of limitations stands in the way; and Catholic bishops should be sued for violating federal antiracketeering laws–the laws that were intended to help dismantle Mafia-run organizations.

No one can fail to be moved by the anguished looks and words of those who recount tales of abuse by priests. But before we rush to adopt the called-for measures, we should look closely at recent news about overturned convictions in the courts and at the growing body of research about human memory. For centuries we have had experience with people who come to court to testify and take the familiar solemn oath. In light of what I have learned about human memory, I propose a more realistic alternative: “Do you swear to tell the truth, the whole truth, or whatever it is you think you remember?”

One has only to look at the growing number of cases in which DNA evidence has been used to exonerate innocent people. This year, the 100th person nationwide was freed from prison after genetic testing. Larry Mayes of Indiana, now 52 years old, spent 21 years in prison for the rape of a gas station cashier. The victim had failed to identify him in two separate lineups and picked him out only after she was hypnotized by police. Mayes’ story is a common one; analyses of these DNA exoneration cases reveal that faulty eyewitness memory is the major cause of wrongful convictions.

Issues have also cropped up in cases that are built on the soggy foundation of “repressed” memory. Arizona pediatrician John Danforth faced accusations by a former patient, Kim Logerquist, who suddenly remembered after an interval of two decades that he had repeatedly sexually molested her when she was between 8 and 10 years old. Her memories included a time when after an assault her panties were soaked with blood and she tossed them in the garbage can. At one point Logerquist wanted $3 million to $5 million in damages. Logerquist had been hospitalized 57 times in the three years before her “flashbacks,” memories that she claimed were repressed until triggered by viewing a television ad for children’s aspirin. It is worth noting that Logerquist spent scores of hours in therapy in which she was urged to try to remember abuse that might explain her problems such as self-mutilation, depression, suicide attempts, obesity, and bulimia. Although she periodically denied it, records showed that she often spent time considering which men other than Danforth had abused her. A forensic psychiatrist bolstered Logerquist’s story with the unsubstantiated claim that people who have flashbacks do not later produce inaccurate recollections of those events. Nothing could be further from the truth. Danforth, in his late 60s, steadfastly maintained his innocence and was eventually cleared. It took the last jury less than 40 minutes to find for Danforth, to the delight of his extended family. The loud cheers were not surprising coming from a family that had endured 10 years of litigation as this landmark repressed-memory case worked its way through various trials and appeals.

Thousands of cases based on recovered memory captured public attention throughout the 1990s. Some involved highly implausible or impossible memory claims such as intergenerational satanic ritual abuse or abuse at the age of six months. These cases were able to go forward because of changes in the statutes of limitation that permitted people to sue their parents, other relatives, teachers, doctors, and others if they claimed that they now remembered sexual abuse that had previously been repressed. The cases proceeded under the belief that when people are repeatedly brutalized, their memories can be completely repressed into the unconscious and later reliably recovered with hypnosis, dream interpretation, sodium amytal, or other therapeutic “memory work.” In fact, no credible scientific support has been found for such claims.

After seeing the vast array of cases in which people sued their alleged abusers or brought them up on criminal charges in jurisdictions that allowed this, we began to see another sort of psychological and legal phenomenon. A large number of patients who came to believe as a result of questionable therapy that they had been extensively abused later concluded that their memories were false. Often having cut off their ties to family or even sought to destroy their families, many of these “retractors” sued their former therapists for planting the false memories. No tricky statute-of-limitation issues were involved here, as these were handled as traditional medical malpractice cases. The largest settlement to date was $10.6 million against a psychiatrist and major hospital in Chicago for a woman and her two young children who were led to believe falsely that they were victims of satanic ritual abuse and had developed multiple personalities. Even the young children were hospitalized for years under this dubious diagnosis, left to flounder with their incredible set of beliefs and false memories.

Then came the third-party lawsuits. Even when the “patients” had not retracted the beliefs, some family members sued the therapists for planting false memories in the mind of their adult child. The first substantial case to come to national attention involved the Ramona family. The daughter came to believe that her father had raped her for more than a decade, memories she acquired when she went into therapy as a sophomore in college. She sued her father, and he in turn sued the therapists who planted these beliefs. A jury in Napa, California, awarded him $500,000.

Then came the “Daddy-dead” cases. It was inconvenient when Daddy took the stand and convincingly denied any abuse, so some accusers waited until he died and then sued the estate. This left grieving widows and other heirs to defend against the abuse claims that might have dated back a quarter of a century. There were also the civil cases brought against corporations by those who claimed that the newly remembered abuse happened on their premises. They would claim that the alleged abuse took place in a McDonald’s bathroom or on a Royal Caribbean cruise or in the high school art room. Even a well-funded corporation has a difficult time defending against supposedly repressed memories about events that purportedly happened 30, 40, or 50 years ago.

Psychological studies have shown that it is virtually impossible to tell the difference between a real memory and one that is a product of imagination or some other process. Occasionally the memories could be shown to be false because they were biologically, geographically, or psychologically impossible. People remembered extensive abuse by a relative who was not living in the area at the time, or they remembered abuse that was supposed to have happened when they were one year old. The documented cases of false belief or memory illusion make it natural to wonder how it is that someone could come to believe that they had been sexually abused for years, and to even have very detailed memories, if in fact it never happened. Studies of memory distortion provide a clue. If there was anything good that came out of this decade of vitriolic controversy, it was a body of scientific research on memory that could leave a lasting positive contribution, at least in terms of its ability to help our understanding of the malleable nature of our memories.

The science of memory

For several decades, I and other psychological scientists have done research on memory distortion, specifically on showing how memories can be changed by things that we are told. Our memories are vulnerable to “post-event information”: to details, ideas, and suggestions that come along after an event has happened. People integrate new materials into their memory, modifying what they believe they personally experienced. When people combine information gathered at the time of an actual experience with new information acquired later, they form a smooth and seamless memory and thereafter have great difficulty telling which facts came from which time.

More specifically, when people experience some actual event–say a crime or an accident–they often later acquire new information about the event. This new information can contaminate the memory. This can happen when the person talks with other people, is exposed to media coverage about the event, or is asked leading questions. A simple question such as “How fast were the cars going when they smashed into each other?” has led experimental witnesses to an auto accident to estimate the speed of the cars as greater than did control witnesses who were asked a question like “How fast were the cars going when they hit each other?” Moreover, those asked the leading “smashed” question were more likely to claim to have seen broken glass, even though no glass had broken at all. Hundreds, perhaps thousands, of studies have revealed this kind of malleability of memory.

Psychological studies have shown that it is virtually impossible to tell the difference between a real memory and one that is a product of imagination or some other process.

But post-event suggestion can do more than alter memory for a detail here and there from an actually experienced event; it can create entirely false memories. In the past few years, new research has shown just how far one can go in creating in the minds of people detailed memories of entire events that never occurred. Here are some examples.

As researchers, we wanted to find out if it was possible to deliberately plant a false memory. We set out by trying to convince subjects that they had been lost in a shopping mall at the age of five for an extended time and were ultimately rescued by an elderly person and reunited with the family. My colleague Jacquie Pickrell and I injected this pseudomemory into normal adults by enlisting the help of their mothers, fathers, and other older relatives, and by telling our subjects that the relatives had told us that these made-up experiences had happened. About a quarter of the subjects in our study were swayed by our suggestions and were led to believe, fully or partially, that they had been lost in this specific way.

Since the initial lost-in-the-mall study, numerous investigators have experimented with planting false memories, and many exceeded our initial levels of successful tampering. Taken together, these studies have taught us much about the memory distortion process. For example, one group of researchers at the University of British Columbia obtained facts about their subjects’ childhoods from relatives and then attempted to elicit a false memory using guided imagery, context reinstatement, and mild social pressure, and by encouraging repeated attempts to recover the memory. The false memories the researchers tried to plant were events such as suffering a serious animal attack, a serious accident, or an injury by another child. They succeeded in creating a complete false memory in 26 percent of their subjects and a partial false memory in another 30 percent. Another research group from the University of Tennessee planted false memories of getting lost in a public place or being rescued by a lifeguard. With the help of techniques to stimulate the subject’s imagination, they succeeded in 37 percent of their subjects. One false lifeguard rescue memory was quite detailed: “We went to the pool at the N the year we lived there. And my parents were lying by the pool, and I was in the shallow end with this kid I knew. And we started swimming toward the deep end, but we didn’t get very far . . . and I remember he started to go under, and he grabbed me and pulled me under with him. And I remember being under water and then hearing this big splash. He jumped in and just grabbed both of us at once and pulled us over to the side . . . And he was yelling at us.”

Efforts to distinguish true from false memories revealed a few statistical differences. For example, the true memories were more emotionally intense than the false ones and images in false memories were more likely to be viewed from the perspective of an observer, whereas images in true memories were more likely to be viewed from the first-person perspective. However, many of the differences between true and false memories are lessened or eliminated when the false memories are repeatedly rehearsed or retold. The statistical differences were never large enough to be able to take a single real-world memory report and reliably classify it as true or false.

The false memories of lifeguard rescues and other created events were helped along by the encouragement to use imagination. In other studies too, imagination has been a fruitful way to lead people to false memories. In one study, imagination succeeded in getting people to be more confident that as a child they had broken a window with their hand, and in another study imagination helped lead people to remember falsely that they kissed a plastic frog.

Imagination helps the false-memory formation process in a number of ways. Some scientists have used the term “memory illusion” to refer to cases in which people have a false belief about the past that is experienced as a memory. In these cases, the person feels as if he or she is directly remembering some past event personally. By contrast, the term “false belief” applies to the case where the person has an incorrect belief about the past but doesn’t feel as if this is being directly remembered. An insinuation or assertion that something happened can make someone believe that it did: a false belief. But imagination supplies details that add substance to the belief. Rehearsal of these details can help to turn the false belief into a memory illusion.

One could argue that these studies bear little resemblance to the world of psychotherapy, which was so frequently implicated in the repressed-memory legal cases. To address this, my Italian collaborator Giuliana Mazzoni and I attempted to create an experimental world that would be somewhat closer to the therapy experience. We began with the observation that dream interpretation is commonly used in psychotherapy. From ancient times, dreams have seemed mysterious and frequently prophetic. Modern bookstores are filled with books devoted solely or partly to the analysis of dream material, and some psychotherapists believe (as did Freud) that dream interpretation can lead to accurate knowledge about the patient’s distant past. We wondered, however, whether dream work might be leading not to an extraction of some buried but true past, but to the planting of a false past. In our first dream study, a large pool of undergraduates filled out a questionnaire screening for the likelihood that certain early childhood experiences had happened to them. These included being lost for an extended period of time or feeling abandoned by their family before the age of three. We selected students who indicated that these experiences probably didn’t happen to them.

The growing number of wrongfully convicted individuals who have been exonerated by DNA evidence has given the world a real appreciation of the problem of faulty eyewitness memory.

Half of the subjects were selected to participate in what they thought was a completely different study, one that involved bringing a recent or recurring dream with them for analysis in a study of sleep and dreams. These subjects related their dreams to a trained clinician, an individual who happened to be a popular radio psychologist in Florence, Italy, where this first study was conducted. He told the subject about his extensive experience in dream interpretation and how it was that dreams reflected buried memories of the past. He talked to the subject about his or her ideas about the dream report and then offered his own interpretation. His analysis was always the same, no matter what the dream report: The dream indicated that the subject had some unhappiness related to a past experience that happened when the subject was very young and might not be remembered. His suggestions became even more specific: that the dream seemed to indicate that the subject had been lost for an extended time in a public place before age 3, that the subject felt abandoned by his or her family, that the subject felt lonely and lost in an unfamiliar place. He stressed that these traumatic experiences could be buried in the subject’s unconscious memory but were expressing themselves in the dream. The entire session with the clinician lasted about a half hour.

A couple of weeks later the students returned to what they thought was the earlier study and once again filled out the screening questionnaire on their childhood experiences. Control subjects who had not been exposed to any dream interpretation responded pretty much as they had before. The majority of subjects whose dreams had been interpreted by the clinician became more confident that they had been lost in a public place before age 3, that they had felt abandoned by their family, and that they had felt lonely and lost in an unfamiliar place. In a later study we tried to find out more about the phenomenological experience: Did subjects have a false belief or did they have a memory illusion? We found that about half the time our dream-interpretation subjects ended up with a false belief and half the time with a memory illusion.

What is remarkable is that such large alterations of autobiography could be achieved so quickly. A half hour with the clinician is far less than the extensive and repeated dream interpretation that goes on in some psychotherapy that spans months or even years. Because many people enter therapy with the notion that dreams reveal real past events, and some therapists bolster this belief and freely suggest possible meanings, the potential for the personal past to become distorted in this way is very real. This is probably why a number of psychologists are now suggesting that dabbling in dream interpretation can be a dangerous activity. Psychologist Tana Dineen, in an essay entitled “Dangerous Dreaming,” suggested that professionals should not pretend to know what dreams mean or that they reveal anything about the past. These and other therapeutic interventions have been vigorously criticized in recent years because of the science-based fear that they encourage patients to concoct images of false events such as sexual abuse, to suppose that these images must be memories, and to act on them in destructive ways.

More routes to memory

People might think that avoiding certain types of psychotherapy where dream interpretation and imagination exercises are used renders them safe from unwanted intrusions into autobiography, but they should think again. There are other avenues by which fiction can creep into memory structures.

In fall 2000, I delivered a series of lectures in New Zealand and on one occasion offered up the prediction that we would see a rise in cases of demonic possession. I’m not sure that my audience took the news with the seriousness that they should have. But I knew a few things they didn’t know. I knew about some recent findings on demonic possession, and I knew then that the famous film The Exorcist was soon to be re-released.

When I learned that The Exorcist would be re-released, I was prompted to look back at what happened in 1971 when William Blatty’s book by that name was first published, followed two years later by the release of the film. Millions of people saw Linda Blair, as the 12-year-old Regan, spewing vomit and waving a bloody crucifix. They saw various priests perform an exorcism on her. What followed were reports of mass hysteria in the form of fainting, vomiting, and trembling during the film, and a mini-epidemic of supposed possession. People sought exorcisms in record numbers. In the words of sociologist Michael Cuneo, “Thousands of households across America seemed to become infested all of a sudden with demonic presences, and Catholic rectories were besieged with calls from people seeking exorcisms for themselves, for their loved ones, and sometimes even for their pets.” Cuneo did an interview with Father Tom Bermingham, who had played a minor role in the film and received screen credit as a technical advisor: “When the movie came out, I found myself on the hot seat. People saw my face and my name on the screen, and they assumed I was the answer to their problems. For quite a while dozens of people were trying to contact me every week. And they weren’t all Catholics. Some were Jewish, some Protestant, some agnostic, and they all believed that they themselves or someone close to them might be demonically possessed. They were truly desperate people.”

What was going on? In giving visual form to a phenomenon, The Exorcist and other films and stories like it convinced people that possession by the devil was plausible, that possession was more than a possibility. Some people were led even further–to actual belief and symptoms. How could this happen? Can it happen only to people who already think that demonic possession is plausible?

Based on a series of studies conducted with Giuliana Mazzoni of Seton Hall University and Irving Kirsch of the University of Connecticut, we understand some of the process. In the first of these studies, subjects first rated the plausibility of a number of events and gave information about their childhood experiences, including the event of witnessing demonic possession as a child. Later, some subjects read several short articles that described demonic possession, suggesting that it was more common than previously thought, and described typical possession experiences. Subjects also took a “fear profile” in which their particular fears were analyzed; whatever their responses on the profile, they were given the false feedback that witnessing a possession during childhood probably caused those fears. In the final phase of the study, subjects once again rated the plausibility of life events and gave information about their own childhood experiences. Relative to control subjects, those who were exposed to the possession manipulation rated witnessing possession as more plausible, and a number of them claimed that it had actually happened to them.

In follow-up studies, we found that the stories alone could produce some influence and that stories that were set in contemporary culture were more effective than those set in some remote time and culture. Taken together, the studies show that reading a few stories and hearing about another individual’s experience can increase plausibility and make you more confident that something, even something implausible, happened to you. A major point worth emphasizing is that the suggestive material in the study worked not only with people who began with the belief that demonic possession was plausible but also with those who began with the belief that it was rather implausible. The studies constitute the beginning of a recipe for making the implausible seem plausible and sending someone down the road to developing a full-blown false memory.

Back to the prediction I made to that New Zealand audience that demonic possession would soon be on the rise. On September 22, 2000, The Exorcist was re-released with 11 added minutes of original footage. On Halloween, there was a broadcast of Possessed, a TV docudrama about a purported exorcism in a mental hospital. By the end of November, the New York Times was reporting that new exorcism teams had been assembled in response to increased public demand. In New Zealand, I’m receiving a lot more respect. This is an example of how the mass media can mythologize reality. It can show us something we have never seen and might never even have imagined otherwise. In this way it gains a pervasive influence over our consciousness in its power to fashion reality for us.

No escape

If you think you can escape by no longer watching films and television programs or reading magazine stories, and instead find refuge in the advertisements, think again: Even this material has the power to tamper with autobiography. Kathryn Braun, Rhiannon Ellis, and I designed a series of studies in which we used advertising copy to try to plant memories. In one study, subjects filled out questionnaires and answered questions about a trip to Disneyland. One group read and evaluated a fake Disneyland ad featuring Bugs Bunny and describing how they met and shook hands with the character. About 16 percent of the people who evaluated the fake Bugs ad later said that they had personally met Bugs Bunny when they visited Disneyland. Later studies showed that with multiple exposures to phony Disney ads involving Bugs, the percentage rose to roughly 30 percent. The problem is that Bugs is a Warner Brothers character not to be found at Disneyland. Despite the impossibility of this false memory, significant numbers of subjects were influenced to remember meeting him and ultimately also became more likely to relate Bugs Bunny to other Disney concepts such as Mickey Mouse or the Magic Castle.

We are not suggesting that advertisers are actually planting false memories deliberately. After all, you would not in reality see an ad for Disney that featured Bugs Bunny. But you might see one featuring a handshake with Mickey Mouse, and this would increase confidence that the viewer personally experienced such a handshake. The memory might be true for some people, but it is certainly not true for all. In this way, the advertisements may actually be tampering with our childhood memories in ways that we’re not even aware of.

What does it all mean?

Medieval and modern philosophical accounts of human cognition stressed the role of imagination. The 18th-century philosopher Immanuel Kant talked about imagination as the faculty for putting together various mental representations such as sense percepts, images, and concepts. This integrative activity bears a great resemblance to what memory actually is and does. We see a film, it feeds into our dreams, it seeps into our memories. Our job as researchers in this area is to understand how it is that pieces of experience are combined to produce what we experience as “memory.” All memory involves reconstruction. We put together pieces of episodes that are not well connected, and we continually make judgments about whether a particular piece belongs in the memory or not. One expects to see shuffling of pieces with a process that works like this.

As scientists work toward understanding how false autobiographical memories come to be, we’ll understand ourselves better, but we will also have a better handle on how such errors might be prevented.

A reconstructed memory that is partly fact and partly fiction might be good enough for many facets of life but inadequate for legal purposes.

What shall we do with all we have learned about the malleable nature of memory? We might start by recognizing that a reconstructed memory that is partly fact and partly fiction might be good enough for many facets of life, but inadequate for legal purposes, where very precise memory often matters. It matters whether the light was red or green, whether the driver of the getaway car had straight hair or curly. It matters whether that face is the face of the person who committed the murder. Keep in mind that some 200 people per day in the United States become criminal defendants after being identified from lineups or photo spreads. The growing number of wrongfully convicted individuals who have been exonerated by DNA evidence has given the world a real appreciation of the problem of faulty eyewitness memory, which is the major cause of wrongful convictions. Faced with the horror of these recent cases, investigations by the U.S. Department of Justice, the Canadian government, and an Illinois Commission on Capital Punishment have resulted in strong and specific recommendations designed to reduce the prevalence of wrongful convictions. Many of the recommendations reflect a heightened appreciation of the malleable nature of memory.

In 1996, the U.S. Department of Justice released a report analyzing 28 cases of DNA exoneration and concluded that 80 percent of these innocent people had been convicted because of faulty eyewitness memory. The Justice Department then assembled a committee that came up with a set of guidelines for law enforcement. Eyewitness Evidence: A Guide for Law Enforcement offers a set of national guidelines for the collection and preservation of eyewitness evidence. The guide includes recommendations such as asking open-ended questions, not interrupting an eyewitness’s responses, and avoiding leading questions. It includes guidelines specifying how lineups should be constructed (for example, including only one true suspect per lineup and including the proper number of “fillers”). The publication, which makes use of psychological findings and explicitly acknowledges that these findings offer the legal system a valuable body of empirical knowledge, is not a legal mandate but rather a document that aims to promote sound professional practice. Nevertheless, it is apparently having an influence on actual practice, and those who deviate significantly from it are often forced under cross-examination to say why.

The Canadians were also rocked by cases of wrongful conviction, prominent among them the case of Thomas Sophonow. He had been wrongfully convicted of murdering a young waitress who worked in a donut shop and spent nearly four years in prison. An official inquiry was established to investigate what went wrong, to determine just compensation for Mr. Sophonow, and to make recommendations about future cases. Commissioner Peter Cory was eloquent in his description of the suffering of this one falsely accused man: “What has he suffered? . . . He is psychologically scarred for life. He will always suffer from the core symptoms of post-traumatic stress disorder. As well, he will always suffer from paranoia, depression, and the obsessive desire to clear his name. His reputation as a murderer has affected him in every aspect of his life, from work to family relations. The community in which he lived believed him to be the murderer of a young woman, and that the crime had intimations of sexual assault. The damage to his reputation could not be greater . . . His reputation as a murderer will follow him wherever he goes. There will always be someone to whisper a false innuendo . . . In the mind of Thomas Sophonow, he will always believe that people are talking about him and his implication in the murder.” Commissioner Cory awarded $1.75 million in nonpecuniary damages, with a total award exceeding $2.5 million. To minimize future miscarriages of justice, the inquiry report on the Sophonow case calls for specific procedural changes in activities such as lineups, as well as more general guidance such as encouraging judges to emphasize to juries the frailties of memory, to recount the tragedies of wrongful convictions, and to readily admit expert testimony on the subject of memory.

A final example comes from Illinois. In March 2000, shortly after Governor Ryan declared a moratorium on executions in the state, he appointed a commission to determine what reforms, if any, would make the state’s capital punishment system fair and just. These activities were prompted in part by the release of 13 men from death row during the preceding decade. Many of these had been exonerated by DNA evidence. Steven Smith had been sentenced to death on the dubious testimony of a single eyewitness. Anthony Porter had been sentenced to death because of two eyewitnesses. They later recanted, and another man subsequently confessed and is now in prison. The commission made 85 recommendations, many of which flowed from a concern about faulty memory. They include training in the science of memory for police, prosecutors, and defense lawyers and the development of jury instructions to educate the jurors about factors that can affect eyewitness memory.

The need for education

These efforts all recognize the need for education in order to integrate psychological science into law and courtroom practice. Judges, jurors, attorneys, and police will almost certainly be helped by an increased understanding of human memory. At a minimum, it is important to fully appreciate that false memory reports can look like true ones and that without independent corroboration it is virtually impossible to tell whether a particular report is the product of true memory or the product of imagination, suggestion, or some other process. Judges and juries sometimes think that they can tell the difference, but they are actually responding to the confidence, the detail, and the emotion with which a memory report is delivered. Unfortunately, these characteristics do not necessarily correspond with reliability.

How shall we educate people about the science of memory? It’s not quite as simple as the late Carl Sagan’s exhortation to teach more about the fundamentals of science in school. Education helps, but it has not protected people from embracing unsubstantiated beliefs such as paranormal phenomena, alien abduction, extraterrestrial visitors, telepathy, or communication with the dead. One effort to reduce these types of beliefs that had some early success involved getting students to participate actively in studies that reveal how such claims can be faked. In the current domain, we might consider not just asserting particular truths about memory but actually showing how studies have been done and what findings have been achieved.

Judges and jurors need to appreciate a point that can’t be stressed enough: True memories cannot be distinguished from false ones without corroboration. Occasionally mental health professionals enter legal cases as expert witnesses and claim that they can tell that a “victim” is telling an accurate story. These purported experts frequently are there to bolster accusations that might otherwise seem strange. Beware of them. As Supreme Court Justice Breyer wrote two years ago in Issues (“Science in the Courtroom,” Summer 2000), “Most judges lack the scientific training that might facilitate the evaluation of scientific claims or the evaluation of expert witnesses who make such claims.” Education can help enhance the appreciation of good scientific information about memory and give judges and jurors the confidence to reject pseudoscientific claims about memory.

Scientific knowledge about memory could be imparted in numerous venues: seminars for judges, law school classes for prospective attorneys, training for police, jury instructions, or expert testimony for jurors. This preliminary and tentative list could be expanded and refined through a cooperative effort by legal and scientific experts to develop a workable program for action. The American Judicature Society, an educational and research organization, recently proposed the creation of an “innocence commission” that would study why the legal system fails, much as the National Transportation Safety Board studies why planes crash. A National Memory Safety Board has a nice ring to me.

And what about the priests?

The past decade produced innumerable casualties associated with claims of repressed or dissociated memories. As we cope with the recent revelations about abuse by Catholic priests, is there a lesson to be learned? As Dorothy Rabinowitz of the Wall Street Journal noted, these new revelations bring home the contrast between bogus charges and credible ones. Many victims of priest abuse had long histories of molestation, repeated over and over, with contemporaneous complaints that were recorded, even if they were hidden from the public. Other victims knew all along about their abuse, even if they never talked about it. Few claims of abuse at age 6 months, of impregnation at age 6, or of abuse in intergenerational satanic rituals adorn these reports. But just as there was real sex abuse before the bogus repressed memory claims emerged, so there will be a mix of real and false accusations against priests, especially because there is the possibility of cash awards for damages. Not only will deliberate frauds emerge, but there will be “victims” who will, through suggestive therapy or media coverage, come to believe that they have been abused by priests when they have not. Publicizing the names of every single priest who might ever have been accused and firing priests simply on the strength of accusations is unfair and unjustified.

After the thousands of criminal charges and lawsuits against alleged abusers, we can expect to see retractors who sue their therapists and falsely accused individuals who sue their accusers and those who helped them develop the accusations. Large sums will be paid not only to those who bring the accusations but also later to those who claim they were falsely accused. It will not be a pretty sight. Apart from the lawsuits, there is the human damage. We’ve seen the names of the accused prominently featured on the front pages and airwaves before there is any sort of investigation. Cardinal Roger Mahony of Los Angeles saw his name in the headlines because of a single accusation by a 51-year-old woman who had been previously diagnosed with schizophrenia. The Los Angeles Times drew parallels between the case of Mahony and that of the late Cardinal Joseph Bernardin. In a civil lawsuit filed against Bernardin in 1993, Steven Cook, a 34-year-old seminarian, charged–on the basis of “recovered memory” induced through hypnosis–that Bernardin had sexually abused him 17 years earlier. He sued for $10 million. The cardinal was “startled and devastated” by the accusation. I was an expert witness in that case and saw close up how dubious the memory recovery was, including the pieces brought out by a massage therapist. Eventually Cook retracted the accusation and apologized. Bernardin forgave him. Although he experienced a newfound sympathy for those falsely accused, the cardinal demonstrated a strengthened resolve to reach out to genuine victims of sexual abuse. Bernardin died of pancreatic cancer in 1996, not long after his accuser had died of AIDS. In the book that he completed 13 days before his death, he singled out his cancer and the false accusations as the “major events” of his life. Although he lived a busy life marked by enough distinguished accomplishments and good works to fill several obituaries, virtually every obituary written after his death found space to mention the allegations of sexual abuse.

Judges, jurors, attorneys, and police will almost certainly be helped by an increased understanding of human memory.

The parallel accusation against Mahony was front-page news for days. His accuser claimed that one day, 32 years earlier when she was in high school, she passed out near the band room and when she awoke her pants were off and she saw Mahony’s face. The police investigated the charge and found it groundless. A careful reader could have seen this reported in the press later the same month. What should we expect to find when his obituary is written?

The example should be a warning of the importance of keeping in mind just who we are. We’re a nation that developed a legal system based first and foremost on due process. Of course we believe that it is important to punish evildoers, but we also have to balance that with the need to protect the innocent. If we ever lose that core element of our justice system, we will lose something that will ultimately cause us a grief far greater than we have ever known. As the church scandal gains momentum, perhaps we should have a commission of respected leaders whose role it is to keep the accusations in perspective and to convince everyone to withhold judgment until the facts are in.

If knowledge about human memory were to help reduce even slightly the likelihood of wrongful accusations, the benefit for the accused and his or her extended family would be obvious. Society would also be better off, because while the wrong person is jailed, the real perpetrator is sometimes out and about committing further crimes.

But knowledge about human memory can help many others. When patients in therapy are being treated under the unsubstantiated belief that they have repressed memories of childhood trauma and that those memories must be excavated, this may not be doing the patients any good. If patients are diverted from the true cause of their problems and from seeking professional help that would actually make them better, they are harmed.

The mental health profession has also suffered from a proliferation of dubious beliefs about memory. The ridicule of a subgroup with questionable memory beliefs drags down the reputation of the entire profession. And finally, there is one last group that is harmed by a system that accepts every single claim of victimization no matter how dubious. That system dilutes and trivializes the experiences of the genuine victims and increases their suffering.

Research Universities in the New Security Environment

When our nation was attacked, we knew that the world was changing before our eyes; that terrorists were using the freedoms and openness we had taken for granted against us and that our lives would never be the same. As members of the science and technology community, we also knew that we would have key roles to play in ensuring the future safety of our country. Our nation’s scientists working in academe, industry, and government have traditionally stepped up to the plate when needed to work toward national goals, and clearly this has already begun. Even now, many are involved in small and large ways as civic scientists engaged in civic duty.

We focus our attention in this article on scientists and engineers in research universities, for we believe that facing up to new dangers will require the best of our researchers in universities in order to advance national security in all of its forms. In the coming years, however, we must keep in mind not just the science and engineering departments but the whole university, because society will need the full complement of intellectual tools to ensure our national security and well-being. The public and our policymakers need to be reminded that research universities play a unique role in many areas: in educating and training students who will become the next generation of informed and engaged citizens, scholars in all disciplines, professionals and leaders in all fields, and of course, the scientists and engineers who will help us to face these tremendous challenges far into the future.

Research universities also have another critically important role to play that is not talked about nearly enough in the S&T community: to help our nation better understand the interconnectedness of the social, cultural, and religious forces that are changing our world. Scholars in our universities can provide deep understanding of some of these issues as the first step toward finding solutions to vexing problems.

These will be among the great challenges for the next generation of research universities. We can excel in science and technology and the education of scientists and engineers. And we can excel in preparing humanists and social scientists. But that is not enough. If we do no more than that, the age-old two-cultures war will rage on at a time when the stakes are simply too high for disciplinary isolation. We must be educated more broadly in order to understand the complexity of the world around us.

Research universities now have a once-in-a-generation opportunity to renew and redefine their relations with the entire society. They also have a unique opportunity to create a new partnership with the federal government to develop new programs, new areas of research, and new strategies to advance our national security and improve our society. But in order to do this, we must also be mindful of policy changes that may weaken the strengths of our current university system.

It is not surprising that in this early period of the nation’s response to terrorism, the government is focusing first on improving security measures to guard against future attacks. Strategic decisions have been made under the auspices of military, intelligence, and law-enforcement agencies. Perhaps most noticeable in our everyday life are the protections now in place in airports, but important changes are occurring in other arenas as well. Along with the potential threats that we are learning to live with are also tremendous opportunities for research and development (R&D) to help make the world a safer place.

The danger in this security-policy upheaval is that actions taken to ensure near-term security might undermine efforts to develop long-term solutions. The risk of unintended consequences is particularly great for university R&D efforts. University leaders are particularly concerned about proposed limitations on researchers’ access to data and methodologies, increasing emphasis on “missiles and medicine” in the 2003 federal R&D budget, and more aggressive tracking of foreign students in universities.

Information access. The news media have reported that in its initial attempts to assess the threat of terrorists developing harmful chemical, biological, or other agents of mass destruction, the Office of Homeland Security has expressed an interest in requesting or requiring limitations on scientific publishing, especially the publication of data sets and methodologies that might lead to the duplication of certain results. The risks and benefits of such action must be clearly understood. The shift from the current “right to know” principle to a system under which much information would be available only to those with a “need to know” threatens to erode some basic democratic principles and the basic framework of scientific interactions.

The traditions and structure of U.S. research today depend on replication and refutation, which require that sufficient data and methods be published in peer-reviewed journals. Openness has enabled the vast majority of advances in civilian applications and innovations in the past 50 or more years and makes our research system the envy of the world. An open research system has led to new knowledge and thus innovations that will continue to drive the economy, ensure national security, and fight terrorism. Open communication of results influences national policies in environmental protection and public health, and it protects against fraudulent results, sloppy science, and political biases guiding important policy decisions.

There may be some circumstances that warrant restrictions, but the onus for blocking publication should rest with the government, through a process that is clearly defined, free of arbitrary edicts, and clearly understood by the research community. This issue calls for a new partnership between the government and research universities to set criteria and standards for any kind of restrictions on publishing of research results.

Missiles and medicine. The second significant risk is the increasing emphasis on defense and health in the FY 2003 R&D budget. Such a limited focus, at the expense of support for other fields, may have long-term consequences for research universities and the nation. Today, defense and health R&D make up more than three-quarters of the federal R&D portfolio (which totals $112 billion for FY 2003), with both sectors increasing. Research universities perform about 11 percent of the nation’s total R&D and more than half of federally funded fundamental research. The federal government funds nearly 60 percent of the R&D performed by universities, and this percentage is going down. Moreover, nearly two-thirds of federal R&D at colleges and universities comes from the National Institutes of Health, a reality that strongly influences the mix of science and engineering disciplines in their R&D portfolios. Other disciplines such as engineering and the physical sciences now account for only 15 and 9 percent respectively of the total university R&D portfolio, far smaller shares than in past years.

These kinds of imbalances mean that our universities might not be training the right mix of scientists, engineers, and other scholars that we will need to bolster national security and economic growth in many areas in the next generation. Pointing out this imbalance is not to suggest that there should be less funding for health R&D; rather, there should be more nondefense R&D in many other disciplines. Perhaps it is even time for the science and technology community to help its research colleagues in other fields and call for increases in federal funding for specific areas in the humanities. The FY 2003 budget request for the National Endowment for the Humanities was a paltry $127 million, of which less than 10 percent goes toward research.
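
To put these budget figures side by side, here is a rough back-of-the-envelope calculation in Python. It uses only the numbers quoted in the two preceding paragraphs, treats “more than three-quarters” as exactly 75 percent and “less than 10 percent” as exactly 10 percent, and is meant as an illustrative sketch rather than official budget analysis.

    # Illustrative arithmetic using figures quoted in the text (FY 2003).
    # "More than three-quarters" and "less than 10 percent" are treated as
    # exact fractions purely for illustration.
    total_federal_rd = 112e9              # total federal R&D portfolio ($)
    defense_health_share = 0.75           # share going to defense and health
    neh_budget_request = 127e6            # National Endowment for the Humanities ($)
    neh_research_share = 0.10             # portion of NEH budget going to research

    defense_health_rd = defense_health_share * total_federal_rd
    neh_research = neh_research_share * neh_budget_request

    print(f"Defense + health R&D: at least ${defense_health_rd / 1e9:.0f} billion")
    print(f"NEH research funding: at most ${neh_research / 1e6:.1f} million")
    print(f"Ratio: roughly {defense_health_rd / neh_research:,.0f} to 1")
    # -> at least $84 billion versus at most $12.7 million,
    #    a ratio on the order of 6,600 to 1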

The danger in this security-policy upheaval is that actions taken to ensure near-term security might undermine efforts to develop long-term solutions.

Although a missiles and medicine approach is an understandable response for a counterterrorism agenda, it is only a beginning. There is no question that the S&T community must provide leadership in research areas that lead to threat reduction. A comprehensive R&D agenda will require investments in many other areas and a better balance to encourage new ideas. Although we cannot predict how rapid shifts in funding priority may directly affect the economy or national security in the short term, we can assert that any dramatic decreases in funding in some areas and resulting imbalances of research across disciplines are likely to have negative effects on the kind of research done in universities, as well as the kind of training scientists and engineers receive. Such redistribution needs to be carefully considered.

A new partnership between the government and research universities could identify significant foci for research and innovation. We need to develop forums and engage in serious discussions with our government leaders to make the case for carefully considered research priorities. By working together, we can avoid pouring money and people into some areas simply because things can be done, while failing to define what needs to be done.

Tracking foreign students. The U.S. government is concerned that potential terrorists may pose as students in order to enter the country. As a consequence, the Homeland Security Presidential Directive states that “The government shall implement measures to end the abuse of student visas and prohibit certain international students from receiving education and training in sensitive areas, including areas of study with direct application to the development and use of weapons of mass destruction.”

According to the State Department’s Mantis list, a screening procedure that applies across government programs and serves several security objectives, the “sensitive” areas could include nuclear technology, missile technology, navigation and guidance control, chemical and biotechnology engineering, remote imaging and reconnaissance, advanced computer/microelectronic technology, materials technology, information security, lasers and directed energy systems, sensors, marine technology, robotics, advanced ceramics, and high-performance metals and alloys. This list describes a large portion of the research and graduate education portfolio in science and engineering at the nation’s leading universities. If implemented without careful consideration, this policy will put our national security at risk for several reasons.

At least since World War II, the United States has prided itself on being a magnet for the brightest students from around the world. Foreign students earned 9.9 percent of bachelor’s degrees, 19.9 percent of master’s degrees, and 27 percent of doctorates in the United States in 1999. In engineering, foreign-born Ph.D.s comprised 45 percent of the total. Foreign students earned 46 percent of doctorates in computer sciences and 31 percent in mathematics. After studying here, many of these students stay and make significant contributions to the economy. This pool of talent is distributed across academe, industry, and all levels of government. In academe, for example, foreign-born Ph.D. holders comprised 28 percent of the scientists and engineers. Others return to their home countries and make positive contributions there. The United States benefits from the work of those who stay and from the close relationships with those who return home.

The data show that our native-born students either are not sufficiently interested in pursuing science, engineering, and mathematics degrees or are not being inspired to do so. If access to foreign students is blocked, who is going to do the research of the future, and who will teach in the universities? This is a serious question with direct consequences for the nation’s long-term economic security. If the United States decides to restrict access to foreign students, it must immediately develop new policies to prevent a net loss of science and engineering personnel at all levels in the next generation. There can and should be more interaction with government leaders to reach solutions that serve national needs. Again, working in partnership with government, educators must consider the options and develop policies to manage this serious risk to our future national security.

U.S. leaders have recognized that something must be done to better ensure that holders of student visas are actually studying in educational institutions, and partnerships are starting to emerge. After some initial difficulties, the higher education community responded positively and worked closely with Sen. Dianne Feinstein (D-Calif.) and other public officials to address the vulnerabilities in the nation’s student visa program. The higher education community is now meeting regularly with the Immigration and Naturalization Service, which is implementing the Student Exchange Visitor Information System (SEVIS).

Proposal for action

Implementing SEVIS efficiently and effectively, however, is not the solution to our increasing reliance on foreign-born researchers and technical workers. We need to inspire more young Americans to pursue careers in science and engineering. We therefore propose a contemporary version of the National Defense Education Act (NDEA) that would be responsive to the current challenges we face.

The NDEA, which provided significant financial assistance to U.S. students pursuing graduate degrees, directly resulted from an increase in the perceived risk to national security that occurred after the launch of Sputnik in October 1957. NDEA marked a change in national science policy in response to national security concerns, and it increased support for large numbers of students who became scientists and engineers from the late 1950s throughout the 1970s. As rocket scientist Wernher von Braun noted in congressional testimony at the time, the challenges “require a new kind of soldier, who may one day be memorialized as the man with the slide rule…It is vital to the national interest that we increase the output of scientific and technical personnel.”

One result of the federal actions that followed was a rise in Ph.D.s awarded annually by U.S. colleges and universities from 8,600 in 1957 to 34,000 in 1973. The careers of many leaders of today’s scientific community were launched in part or in whole by NDEA support. The nation needs a program geared to current challenges and conditions that can yield comparable results.

There may, however, be an even more compelling reason for a new federal initiative to draw U.S. students into science and engineering. Our homeland defense and national security needs should motivate us to tap into the large pool of women and minorities who have been underrepresented in science and engineering. Some critics of the research university argue that the encouragement of foreign students to enroll in U.S. graduate and research programs reflects the nation’s unwillingness to provide significant incentives to our own young people, especially women and minorities, to become serious about science and engineering careers. They argue that as a nation, and as research universities, we are unwilling to spend the needed resources to prepare, recruit, and then support all kinds of students to pursue these careers.

A contemporary federal initiative that would be consistent with the president’s call for national service could finally tap into that huge sector of the population that has not been readily welcomed before. A governmental call would draw on our most talented young people from all sectors of society to explore areas of scholarship that are important to national security: all fields of science and technology, security and intelligence, defense, foreign relations, and economic development. Our nation needs well-educated students in all of these areas.

In recruiting young people to fill clearly identified roles in the national interest, we must be careful not to lose sight of the broader purpose of education. In Consilience: The Unity of Knowledge, E.O. Wilson provides his vision of what education should achieve in the next generation of students: “Every college student should be able to answer the following question: What is the relation between science and the humanities, and how important is it for human welfare? Most of the issues that vex humanity daily–economic conflict, arms escalation, overpopulation, abortion, environment, poverty–cannot be solved without integrating knowledge from the natural sciences with that of the social sciences and humanities. Only fluency across the boundaries will provide a clear view of the world as it really is.” Preparing this truly well-educated student is our most risky business in the next generation, and we must step up to the challenge now. This is, therefore, yet another important area that should be discussed within the framework of a new partnership between government and research universities.

We are reminded of H.L. Mencken’s famous quote: “For every complex problem, there is a solution that is simple, neat, and wrong.” Balancing long-term and short-term needs, attending to national and global interests, and integrating science, engineering, social science, and humanities in a dangerous and rapidly changing world will not be easy. That is why we emphasize the need for close collaboration between government and universities. Negotiating this obstacle course will require long discussions among the wisest members of both communities. The path may be complex and messy, but at least it has a fighting chance of being right.

Although we recognize that the translation of goals into policies is never straightforward, we also think it is essential to have clear goals in mind as we move forward. We believe that the scientific and technological creativity and innovation that support our national security in so many ways–economic, military, health, environment, education–will advance if we adhere to three broad goals:

  • The free flow of information should not be restricted without considerable deliberation and acknowledgement of risks to the overall R&D effort.
  • The nation’s research priorities should be broad enough to enable exploration and discovery in new areas from which unanticipated benefits may be derived.
  • We must create pathways for U.S. students, as well as the brightest foreign students, to study and succeed in our research universities.

Our universities have much to offer in this new globally oriented world. The university research community and our government must work together to address these new challenges constructively. If we do not, we will become a weaker country, and we will have allowed the terrorists to make progress. If we are able to work together in ways that are respectful of each other’s needs and strengths, we will emerge as a much stronger country, as well as a much better one.

Science’s Role in Natural Resource Decisions

The call for land management and regulatory agencies to center their decision processes on “sound science” or “good science” has become a kind of mantra, so that no speech or directive about natural resource decisionmaking is any longer thought to be complete without some recourse to these magic words. At a House Resources Committee hearing on February 5, 2002, for example, the committee’s chairman, Rep. James Hansen (R-Utah), urged reform of the Endangered Species Act (ESA) so that it is grounded “in sound science, not political ideology.” Such examples could be cited endlessly; the challenge would be to find a policy proclamation that does not contain such a reference. Perhaps the capstone of this phenomenon was provided by the Clinton administration when it decided to appoint a citizens committee to recommend changes in the Forest Service’s planning regulations. Rather than entrust the task to planners or public administrators, the secretary of Agriculture appointed a “committee of scientists.” If we expect scientists to have some privileged understanding of planning regulations, it is hardly surprising that we consistently invoke good science as the sole reliable path to sound resource decisions.

But this invocation has become as problematic as it is ubiquitous. In fact, almost every time someone calls for centering some policy or decision on sound science, we simply compound the problem. And we will continue to compound it until we begin to recognize that we are still using a century-old and increasingly outdated view of the relationship between science and natural resource management. This nexus was woven into the very fabric of public policy, and especially of resource policy, by the Progressive movement at the turn of the last century.

The Progressives believed that science could and should transform public policy as thoroughly as it had already transformed physical existence. The hard certainties that science produced could now begin to replace the notorious uncertainties so often produced by politics. City government, for example, would be transformed by replacing elected mayors [who made decisions in the old-fashioned, messy (if not actually corrupt) political way] with professional managers who would apply political science to city problems. Replacing traditional decision processes in an embedded context like city hall presented a much greater challenge than the brand new arena of resource management, where the scientific approach had the whole field to itself. Under the aggressive leadership of people like Gifford Pinchot, Progressivism entered that field with a vengeance. The Forest Service, for example, was born under that star; the agency built its identity and based its very substantial institutional pride on its commitment to professional, science-centered resource management.

Now, a century later, politicians and others are repeatedly urging land and resource management agencies to put even more weight on the old Progressive model. That is precisely what Hansen was doing when he criticized the ESA. It was what Undersecretary of Agriculture Mark Rey had done in a milder form when he declared, during a presentation of the Forest Service budget to a Senate committee on February 12, 2002, that, “the budget underscores the Forest Service as a science-based organization.” Pinchot could have used language like that a century ago, and the science of the day would have given credibility to his assertion. But science itself has not stood still in the intervening century. As the 20th century progressed, the radical predictability of Newtonian physics (upon which the Progressive faith so largely rested) began to be assaulted by the equally radical unpredictability first identified as a principle of quantum physics. Although there remained, of course, a vast range of highly predictable phenomena, much of the universe now had to be understood as inherently impossible to forecast. Yet, even though science now presents us with a fundamentally different view of the world than that of a century ago, our expectations of the role science should play in land and resource management have not kept pace.

This problem can be illustrated by focusing on the role that science is expected to play in ecosystem management. The ubiquitous invocation of good science as the lodestar for ecosystem management decisions rests on the assumption that we can ever know enough to “manage” ecosystems. In fact, as science itself has taught us, ecosystems are inherently too complex to be known, let alone managed in that way. Because of this complexity, there is always more that could be known about any given species, habitat, or natural system. So why would anyone continue to speak and act as if good science by itself could get to the bottom of these bottomless phenomena and in the process give us “the answer” to difficult natural resource issues? In large part this is simply a holdover of an anachronistic view of how the world works and of what science can tell us about that world. In this sense, the repeated invocation of good science as the key to resolving complex ecosystem problems has itself become bad science. What is infinitely worse is that this bad science is all too readily made the servant of bad government.

Disingenuous appeals

The appeal to good science is often only a way of using the unfathomable complexity of natural systems to forestall or undermine a decision that some group or individual opposes. The basic line of reasoning is that “we don’t know enough yet to make this decision.” Within limits, such circumspection is a valuable element of any good decision process. But it is easily perverted into saying, in effect: “Because there is more that could be known about this subject, we should not make any decisions until we know everything.” But if by good science we mean knowing everything that can be known about a given issue, then this appeal to good science is not only bad science (because there is always more that can be known about any genuinely complex system), but it is also bad governance (because decisions do in fact have to be made in real time, and if science cannot make them for us, we need to stop pretending that it can).

Rey summed up the problem when he testified before a congressional committee on March 6, 2002. He talked about “a myth that has grown up in the midst of natural resources decisionmaking.” The myth, he said, is that “good science can, by itself, somehow make difficult natural resource decisions for us and relieve us of the necessity to engage in the hard work of democratic deliberations that must finally shoulder the weight of those decisions.” Rey’s warning goes to the heart of the matter, but we are still a long way from eradicating the myth. In fact, nearly every new invocation of sound science or good science only compounds the problem. Meanwhile, however, a seemingly unrelated phenomenon in the field of land and resource management provides an opportunity to realign the relationship of science and policy in a way that is more consistent with new scientific understandings. That phenomenon is sometimes referred to as the collaboration movement.

It is a myth that good science can, by itself, somehow make difficult natural resource decisions for us.

For more than a decade now, the American West has been the scene of a steadily growing number of local agreements among western environmentalists, ranchers, loggers, miners, and recreationists about how the public land and natural resources should be managed in their river drainage or ecosystem. The list of such local collaborative efforts is now growing too fast to be catalogued, but the work of groups such as the Henry’s Fork Watershed Council, the Quincy Library Group, the Willapa Alliance, the Malpai Borderlands Group, and the Applegate Partnership is beginning to add up to something of genuinely historic proportions. A steadily expanding number of westerners on both sides of the political fence now believe that they can produce better results for their communities and their ecosystems by working together to solve resource problems than by continuing to rely on the adversarial and increasingly dysfunctional mechanisms of the existing decision structure.

There are two ways in which collaboration creates a radical change in the way science is brought to bear on natural resource decisions. First is the crucial role of local knowledge in every collaborative effort. Effective collaborators are, nearly without exception, longtime inhabitants of the ecosystems in which they are collaborating. They know those ecosystems in a variety of ways, all arising from their years or even generations of having lived with their complexities. This ingrained knowledge is not incidental to the process of collaboration; it is essential to it. It provides a way of knowing the ecosystem that an appeal to objective, external, expert science simply cannot supply.

One example from the dozens that could be chosen will illustrate the crucial role of local knowledge in collaborative work. In 1992, while developing a basin-wide water management plan for the Clark Fork River in Montana, the Clark Fork Basin Committee went on a series of field trips to become familiar with the range of water uses in the basin. As one observer, Donald Snow, put it, “Biologists, irrigators, mineral processors, and others were able to inform steering committee members of the organizations’ interests in the river. A lot of native wisdom came forth–the kind of knowledge gathered up by people on the land, people whose livelihoods depend directly on the river.” In one crucial conversation, the observations of local rancher and water guru Eugene Manley brought into play specific and highly reliable information, garnered during more than 70 years of living in the valley, that helped the group understand the timing required to satisfy both instream flows and the needs of irrigation. It would be difficult to exaggerate the central role that such ingrained knowledge plays, time and again, in enabling longtime adversaries to discover a common base of factual understanding on which they can then develop innovative and sustainable management decisions.

Rejecting conflict

The role of local knowledge in making collaboration work leads to the second way in which collaboration contributes to a new positioning of science within natural resource decisionmaking. Collaboration has arisen and spread because it offers an alternative to the highly adversarial form of public involvement that now dominates almost all public decision processes. An integral part of that approach has been adversarial science. Each side in any contentious resource issue hires as many scientists as it needs or can afford and puts their conclusions in the record. The resulting image of science for sale creates deep public cynicism about scientists, of course, but it also corrodes confidence in the decisionmaking process itself. How can lay people, either citizens or officials, possibly hope to know what is right for their ecosystems when scientists cannot even agree about it? This leads either to alienation from public life altogether or to one more spurious invocation of good science to save democracy from this quandary.

Collaboration slices through this Gordian knot in a totally unexpected way. Rejecting the adversarial approach to decisionmaking, it necessarily rejects the use of adversarial science as well. Collaborators begin by determining what they already know about their ecosystem on the basis of their local knowledge. They then agree on what they don’t know but need to know in order to make wise and sustainable decisions about their ecosystem. The need to know is the crucial element here. What they don’t know about their ecosystem is infinite, and therefore in a sense irrelevant. Collaboration works when diverse interests can agree on what portion of that infinity they need to explore. Even more important, collaboration works when opposing interests can agree on the specific scientists or scientific procedures that can give them reliable information to fill in the relevant gaps in local knowledge. This move rescues science from its adversarial perversion while enabling it to play a role that is actually within its grasp: providing reasonably reliable information about a reasonably determined set of ecosystem parameters. Without that consensual determination of the questions science is expected to answer, we continually set science up by expecting it to give us the answers without having done the civic work of first deciding what the questions are.

There are some glimmers of hope (Rey’s testimony to the Senate committee is one example) that this more mature understanding of the role of science in resource decisions is spreading from local collaborative settings to higher policy circles. A constructive next step would be for agency leaders and elected officials to begin conscientiously resisting the temptation to appeal to good science as a shortcut to decisions that can only be made by democratic deliberation.

The problem, of course, is deeper than rhetoric, but a heightened awareness of the language used to describe the role of science within a democratic decisionmaking system would go a long way toward dissolving the myth. Beyond that, a whole new training framework needs to be thought through and implemented to help public officials and agency personnel at all levels understand more clearly the emerging role of local knowledge in collaborative decisionmaking. By implication, that training should also emphasize the still often crucial but far more realistically modest role of hard science in collaborative settings.

One of the greatest challenges is to rethink and reposition the role of science in the National Environmental Policy Act and other decision processes in such a way that the adversarial use of science is minimized, the recourse to local knowledge is emphasized, and science and scientists are routinely called on to fill consensually identified information gaps.

Finally, schools of natural resources and public administration should be encouraged to incorporate into their curricula a rigorous exploration of the changing role of science in natural resource decisionmaking and management. There is clearly much to be learned about that changing relationship, and it should be a refreshing change for both theorists and practitioners to move beyond the myth of scientific omnipotence that has so clearly outlived its usefulness.

Reducing Mercury Pollution from Electric Power Plants

The majority of electricity in the United States is produced by power plants that burn coal: 464 such plants generate 56 percent of all U.S. electricity. These plants are also the nation’s single biggest source of mercury pollution. Each year, they spew a total of 48 tons of mercury into the atmosphere, roughly a third of all human-generated mercury emissions. There is sound evidence that mercury emissions from coal-burning power plants can, in fairly short order, be cut dramatically and cost-efficiently. Yet plans to curtail emissions of this hazardous pollutant have become enmeshed in an intense squabble as politicians and regulators debate the specific regulatory framework to be implemented.
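
As a quick sanity check on these figures, the following sketch uses only the numbers in the paragraph above, treating “roughly a third” as exactly one-third and tons as 2,000-pound short tons, to show what they imply about total human-generated mercury emissions and about the average emissions of a single coal-fired plant.

    # Back-of-the-envelope implications of the figures cited above.
    # "Roughly a third" is treated as exactly one-third, and tons as
    # 2,000-pound short tons, purely for illustration.
    plants = 464
    power_plant_mercury_tons = 48        # tons emitted per year by these plants
    share_of_human_emissions = 1 / 3     # approximate share cited in the text

    total_human_emissions = power_plant_mercury_tons / share_of_human_emissions
    per_plant_pounds = power_plant_mercury_tons / plants * 2000

    print(f"Implied total human-generated mercury: ~{total_human_emissions:.0f} tons/year")
    print(f"Average per coal-fired plant: ~{per_plant_pounds:.0f} pounds/year")
    # -> roughly 144 tons/year in total, and about 200 pounds per plant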

The Environmental Protection Agency (EPA), which is required under the Clean Air Act to regulate hazardous air pollutants, is developing regulations that would require reducing mercury emissions by up to 90 percent in 2007. However, the Bush administration now is asking Congress to pass legislation requiring less stringent mercury reductions and spreading the reductions over a much longer time. In order to stave off this push, Sen. James Jeffords (I-Vt.), chair of the Senate Environment and Public Works Committee, has introduced his own legislation to codify the 90 percent reduction levels by 2007, and he has indicated that passage of this bill is his top priority. Given the significant threats that mercury pollution poses to human health and the environment, along with the recent strides made in improving emission control technologies, the case for following Sen. Jeffords’s lead is compelling.

When coal is burned in power plants, the trace amount of mercury that it contains passes along with the flue gas into the atmosphere. The mercury eventually falls back to earth in rain or snow, or as dry particles, either locally or sometimes hundreds of miles distant. According to data from mercury monitoring stations nationwide, the highest deposition rates occur in the southern Great Lakes, the Ohio Valley, the Northeast, and scattered areas in the Southeast; basically, in areas around and downwind of coal-fired power plants.

Once the mercury is deposited on land or in water, bacteria often act to change the metal into an organic form, called methylmercury, that easily enters the food chain and “bioaccumulates.” At the upper reaches of the food chain, some fish and other predators end up with mercury levels more than a million times higher than those in the surrounding environment. For the humans and wildlife that ultimately consume these species, these concentrations can be poisonous.

In the United States, the primary source of mercury exposure among humans is through consumption of contaminated fish. Women who are pregnant or may become pregnant, nursing mothers, and children are the populations of greatest concern. When a pregnant woman ingests mercury, it is easily absorbed by her blood and tissues and readily passes to the developing fetus, where it may cause neurotoxicity (damage to the brain or nervous system). This damage eventually may lead to developmental neurological disorders, such as cerebral palsy, delayed onset of walking and talking, and learning disabilities. Approximately 60,000 children may be born in the United States each year with neurological problems due to mercury exposure in the womb, according to a 2000 report by the National Research Council. Even after birth, young children who ingest mercury, from either breast milk or contaminated foods, remain especially susceptible to the pollutant’s neurotoxic effects, because their brains are still in a period of rapid development.

To help protect the public against such potential dangers, the Food and Drug Administration (FDA), which regulates commercially sold fish and seafood, issued an advisory in 2001 for those groups of people deemed most at risk. The advisory recommended that these populations avoid eating swordfish, shark, king mackerel, and tilefish, and that they limit their consumption of other seafood to an average of 12 ounces per week. Concurrently, EPA issued a recommendation that sensitive populations limit their intake of freshwater fish to one meal per week, with adults limiting their total weekly consumption to 6 ounces and children to 2 ounces. States have taken action as well, with 41 states now advising residents to limit consumption of certain species of fish. Although all fish contain some levels of mercury, states generally advise residents to limit their consumption of those species, such as bass, northern pike, walleye, and lake trout, that prey on other fish.

There is disagreement, however, about which set of recommendations will provide the best measure of safety. Some groups maintain that EPA’s approach is generally more protective than is FDA’s, and some also have accused FDA of catering to the tuna industry by not adding this species to its fish advisory. FDA recently announced that its Foods Advisory Committee will reexamine its fish consumption advisory and issues surrounding mercury in commercial seafood. But even as this particular debate continues, it remains clear that, above all, adequate steps are needed to reduce the amount of mercury emitted into the environment in the first place.

Seeking satisfactory standards

The Clean Air Act Amendments, passed in 1990, require that EPA establish emission standards for the major sources of 188 different hazardous air pollutants, including mercury. These standards must require the maximum degree of emission reductions that EPA determines to be achievable, and hence are known as Maximum Achievable Control Technology (MACT) standards. EPA already has set MACT standards for several major sources of mercury emissions. For incinerators used to burn municipal wastes and to destroy medical wastes, EPA has established standards that will reduce their mercury emissions by 90 percent and 94 percent, respectively. Similar standards also have been proposed for hazardous waste incinerators.

Utilities are the last major source of mercury emissions to remain unregulated. The industry secured congressional exemptions from the MACT standards until EPA conducted a number of studies on mercury’s sources and health effects. The studies concluded, among other things, that out of 67 toxic air pollutants emitted from coal-fired power plants, mercury was of greatest concern. Armed with these data and working under a deadline imposed by a federal court, EPA announced a plan to propose regulations for utility mercury emissions by 2003, finalize them in 2004, and require actual mercury reductions in 2007. Based on data already collected from analyses of coal-fired boilers, EPA has estimated that reductions of up to 90 percent may be required under the MACT standard.

But as EPA was moving ahead, the Bush administration stepped in. On February 14, 2002, the administration proposed its “Clear Skies Initiative,” which would reduce power plant mercury emissions by only 46 percent in 2010 and 69 percent in 2018, rather than the 90 percent reduction in 2007 under a MACT standard. Because this proposal requires congressional action to become law, the administration is looking for an influential member of Congress to introduce it.

In response, numerous members of both parties in the Senate and House have called on the administration to continue developing strict MACT standards and to strengthen its legislative proposal for mercury. Their advice is sound, on both technical and economic grounds.

Technology available

Even though they are not yet required to reduce mercury emissions, utilities already have removed 35 percent of the mercury from the coal they burn, without really trying. This is because many of the pollution control technologies installed on power plants to remove nitrogen oxides (NOx), sulfur dioxide (SO2), and particulates also remove mercury from the flue gas. With new regulations for NOx, SO2, and particulates expected in the near future, the industry’s incidental mercury capture rate is expected to increase further as additional controls for these pollutants are installed. EPA estimates that mercury emissions can be reduced by 46 percent by 2010 in this manner, exactly the level of reduction called for in the administration’s Clear Skies Initiative. It would seem, then, that this proposal is not calling for much extra effort on the part of utilities.

Indeed, some combinations of existing pollution control technologies have achieved more than 98 percent mercury reductions at individual power plants. Of course, attaining consistent 90 percent mercury reductions across the industry, the level proposed by Sen. Jeffords and anticipated in EPA’s estimates, will take much more than simply relying on other regulations and the control technologies they require. To help reach this goal, the Department of Energy (DOE) has partnered with eight groups of utilities and entrepreneurs to fund mercury control projects on actual power plants. The basic strategy of these ventures is to find new ways to enhance the ability of existing control technologies to capture mercury. Through this program, DOE hopes to develop control options that are cost-effective and can reliably reduce mercury emissions by 50 to 70 percent by 2005, and by 90 percent by 2010. On the basis of preliminary results, DOE believes that it will meet the first goal this year. Although DOE’s second goal of a 90 percent reduction by 2010 comes three years after EPA’s target date, the developers of the technologies being tested, as well as other entrepreneurs in the field, believe that they will exceed this goal as well.

There is sound evidence that mercury emissions from coal-burning power plants can, in fairly short order, be cut dramatically and cost-efficiently.

Utilities sometimes argue that these reduction levels will be more difficult to reach using certain types of coal. For example, mercury from subbituminous coal, common in the western states, is difficult to control because it exists mostly in the elemental form in flue gas. But some utilities that burn subbituminous coal already have achieved approximately 75 percent reductions using existing control equipment, and a number of new technologies are being developed that can reduce mercury from such coal as effectively as from bituminous coal. It also should be noted that EPA has considered having different requirements for different types of coal under the MACT standards being developed. Even under this scenario, EPA calculated that 43 tons of mercury emissions could be reduced overall, which is still a 90 percent reduction from the current total.
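
The consistency of these numbers is easy to verify with a line of arithmetic (a sketch using the 48-ton annual total cited earlier in this article):

    # Quick consistency check of the coal-specific MACT scenario.
    current_emissions_tons = 48    # annual mercury emissions from power plants
    reduction_tons = 43            # reduction EPA calculated under that scenario

    print(f"Reduction: {reduction_tons / current_emissions_tons:.0%}")
    # -> 90%, consistent with the roughly 90 percent reduction described above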

Another obvious concern for utilities is the cost of control measures. Today, the most well-developed option for controlling mercury emissions is called “activated carbon injection,” a technology that has been used in incinerators for years. According to recent EPA estimates, use of this technology in power plants today would cost only fractions of a penny per kilowatt hour of electricity produced, roughly the same as for technologies currently used to reduce NOx emissions. Although mercury and NOx pollution pose different health and environmental risks, it would be hard to argue that mercury is less important to mitigate. Also, because NOx regulations did not have a significant effect on consumer prices for electricity, mercury regulations are not expected to either.
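
To get a feel for what “fractions of a penny per kilowatt hour” means for a household electricity bill, here is an illustrative calculation. Both inputs are assumptions chosen for illustration, not EPA figures: a control cost of 0.05 cents per kilowatt hour and annual household use of 10,000 kilowatt hours.

    # Illustrative only: both values below are assumed, not taken from EPA.
    assumed_control_cost_cents_per_kwh = 0.05   # "fractions of a penny" per kWh
    assumed_household_kwh_per_year = 10_000     # rough annual household use

    annual_cost = assumed_control_cost_cents_per_kwh / 100 * assumed_household_kwh_per_year
    print(f"Added cost per household: about ${annual_cost:.2f} per year")
    # -> about $5 per year under these assumed values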

Moreover, it is reasonable to assume that new mercury control technologies now being developed will be even less expensive. DOE’s stated goal is to produce technologies that, by 2010, will be 50 to 75 percent cheaper than today’s versions. Also, the Electric Power Research Institute currently is evaluating more than a thousand potential processes and sorbent materials for mercury control, and many of these already appear less expensive than using activated carbon. Finally, once regulations are set, control technology costs almost always go down as more entrepreneurs enter the business and more capital is expended in R&D. For example, the projected costs of the Clean Air Act’s Acid Rain Program, a regulatory program for SO2 and NOx, fell by two-thirds between 1989 and 1997.

Utilities also express concern about some possible unintended effects of removing mercury from flue gas. For example, utilities now recycle some of the wastes from coal-fired boilers into useful products, such as wallboard, cement, and fertilizer, that are sold to help offset operating costs. The remaining wastes typically are put into landfills. Both options rest on the fact that today’s wastes contain very low levels of mercury. However, future control regulations likely will result in additional levels of mercury in the wastes. Although some observers believe that this minute addition of mercury (which will be in a solid, stable state) will not change the characteristics of the wastes or affect any byproducts produced from them, others are concerned that mercury might escape into the environment through water leaching or volatilization. Future wastes also will probably contain more activated carbon (one of the substances used to remove mercury), and there is some concern that this increase may render certain byproducts, such as cement, unmarketable. EPA, DOE, and others are looking into these issues to determine whether current practices can continue.

Another controversial issue to be addressed is whether the mercury control program eventually adopted should allow utilities to trade mercury credits among facilities. Under a trading program, a power plant could continue to emit high levels of mercury by buying credits from a plant that reduced mercury emissions beyond EPA’s requirements. Most stakeholders support trading schemes for pollutants such as SO2 and NOx. But environmentalists and various community groups think that trading is inappropriate for mercury. They believe mercury to have greater health and environmental effects at the local level than do other pollutants, and thus they think trading would lead to the formation of “hot spots” of contamination around dirty power plants. Answering this question definitively will require more research on mercury’s fate once released into the environment. But it appears that there is some justification for treating mercury differently from other pollutants by ensuring that all power plants make significant cuts in their emissions of mercury. This idea is further confirmed by the Clean Air Act itself, under which trading is prohibited for hazardous air pollutants, such as mercury, that are regulated under the MACT program. Sen. Jeffords’s proposed legislation also would prohibit mercury trading, whereas the administration’s proposal would allow it.

With all these various forces at work, determining a solution to the mercury problem will not be easy, and members of Congress will have to consider a number of issues as they decide how to proceed. Fortunately, even if Congress fails to pass legislation to address mercury emissions, EPA still will be required to propose MACT standards for power plants by December 2003. Many observers believe that this route actually will be more effective in protecting human health, since it has been used successfully to regulate other hazardous air pollutants listed in the Clean Air Act. However, in light of the expected effort by the Bush administration to weaken EPA’s position, the safest way to ensure swift and decisive action is for Congress to pass legislation calling for a 90 percent reduction in mercury emissions in 2007. Such action will protect the long-term health and well-being of the nation’s lakes, streams, wildlife, and–most important–its people.

The Technology Assessment Approach to Climate Change

Policy debate on global climate change is deadlocked. Why? One major reason is that assessment of options for reducing greenhouse gases has been strikingly ineffective. The Intergovernmental Panel on Climate Change (IPCC), which produces respected and successful assessments of atmospheric science, has applied the same approach to the fundamentally different problem of assessing technological and managerial options to reduce emissions. The predictable result has been options assessments that are broad, vague, and disconnected from practical problems. Crucially, the IPCC has failed to draw on private-sector expertise. Yet such expertise could inform policy and promote emission reductions directly, as one prominent recent success demonstrates: the assessment of technological options to reduce ozone-depleting chemicals under the Montreal Protocol. An assessment process similar to that used for ozone-depleting chemicals can be applied to problems of mitigating greenhouse gas emissions and may represent the best near-term opportunity to ease the present policy deadlock.

The sharpest debate over climate change has concerned how to respond to uncertainties in climate science, such as the significance of recent climate trends, their attribution to human influences, and climate model projections of future changes and their impacts. But these are not the only uncertainties that matter. Equally important are uncertainties over future greenhouse gas emissions and their control. How fast will emissions grow if unchecked? How much can they be reduced, by what means, at what cost? The deadlock persists, and climate science uncertainties matter, because of widespread concern that emission cuts will generate serious economic and social costs. If it became clear that cutting emissions was cheap and easy, the present deadlock would yield readily to agreement on large precautionary cuts, despite uncertainties in climate projections.

But future emissions and the ease with which they can be reduced are much more uncertain than the present debate would suggest. Under plausible assumptions about socioeconomic and technological change, global emissions in 2100 could range from half to 10 times present levels. This uncertainty stems from imperfectly understood demographic, behavioral, and economic processes. Technological change is an important component of the equation, too. Even leaving aside the possibility of fundamental technological advances, there are many incremental innovations that can reduce future emissions substantially. These include measures to increase the efficiency of energy use, reduce the carbon content of primary energy, decouple atmospheric emissions from fossil energy use, and target non-CO2 greenhouse gases from industrial and agricultural activities. Expert assessments of such options can reduce uncertainty about the cost of limiting emissions and provide useful input to policy decisions. Yet several assessments of greenhouse gas reduction options have achieved little, either in reducing uncertainties or in providing useful policy advice.
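
To translate that “half to 10 times present levels” range into more familiar terms, the sketch below computes the implied average annual growth rates, assuming a roughly 100-year horizon between now and 2100 (an assumption made purely for illustration).

    # Implied average annual growth rates for the emissions range cited above,
    # assuming a 100-year horizon (an illustrative simplification).
    horizon_years = 100

    for label, ratio in [("low case (half of present levels)", 0.5),
                         ("high case (10 times present levels)", 10.0)]:
        annual_rate = ratio ** (1 / horizon_years) - 1
        print(f"{label}: {annual_rate:+.2%} per year on average")
    # -> about -0.7% per year in the low case and +2.3% per year in the high case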

This failure of options assessment reflects no special discredit on the IPCC. Many attempts to assess options for managing other environmental issues have similarly failed because of a basic structural problem that all such attempts face. Successful assessment requires the energetic and honest efforts of first-rank experts from the industries that are potential targets of regulatory controls. But these people’s time and attention are among their companies’ most valuable competitive assets. Releasing them to help advise public policy is costly under any conditions. Releasing them to help formulate regulatory restrictions on their own companies is even less attractive. No company or industry has an interest in helping regulators impose burdens on it.

The record attests to the force of this obstacle: Options assessments are attempted infrequently, and succeed even less frequently at engaging industry expertise. When private interests do get involved–typically when an issue’s political salience makes it risky for firms not to participate–their recommendations usually follow one of two patterns. Most often they are so vague, abstract, and qualified that they provide no useful policy guidance. In other cases, they provide a forceful defense of the status quo, arguing that changes in current products or practices would be costly, difficult, or futile, or would lead to health and environmental costs as bad as those they avoid.

One striking exception to this pattern is the assessment of technological options to reduce ozone-depleting chemicals under the Montreal Protocol. This treaty, the centerpiece of the ozone-layer regime, is the most conspicuous success yet in managing any international environmental issue. The ozone regime enjoys nearly universal participation and has cut ozone-depleting chemicals by 95 percent in 15 years, with reductions still growing. This success was not achieved by the control measures in the original treaty. Instead, it was achieved by the rapid adaptation of the controls and the flood of innovations that followed. The protocol’s novel process of assessing alternatives to ozone-depleting chemicals was central to this adaptation. Where so many prior attempts had failed, it consistently drew in industry experts who provided high-quality technical advice and spurred development and adoption of measures to reduce chemical use. These linked processes of assessment, innovation, and diffusion were so powerful that they almost made the regulations appear superfluous, as private reduction efforts stayed consistently ahead of regulatory requirements.

Keys to success

This success was not due to uniquely benign characteristics of the ozone issue. Indeed, the Montreal Protocol was achieved only after 10 years of policy deadlock that included several unsuccessful attempts to assess technological options. The most serious efforts were two 1979 studies, one by the Rand Corporation and one by a National Academy of Sciences (NAS) committee. These studies included industry surveys and interviews and, in the NAS study, a few industry experts as participants. Expert views at the time diverged widely, yet both studies reinforced the industry position that chlorofluorocarbon (CFC) cuts would be difficult, costly, and dangerous. They concluded that the maximum reduction in U.S. CFC use achievable at any price was 25 percent (Rand) to 50 percent (NAS). Proponents of CFC reductions could not demonstrate that extensive reductions were feasible, because they lacked the authoritative technical knowledge to rebut industry claims.

The ozone regime overcame this blockage, providing a powerful example of effective assessment of technological options. Yet, for climate change and other issues, this example has been ignored. A technology assessment panel was one of four independent expert panels (on atmospheric science, the effects of ozone loss, technology, and economics) established by the 1987 Montreal Protocol to review new results and advise the parties’ periodic reviews of control measures. Because they were organized in some haste late in 1988 in response to pressure for tightening the protocol, the panels enjoyed considerable freedom. They were permitted to choose participants, carry out their work, and prepare reports to the parties with little political oversight–independence that greatly enhanced their effectiveness.

Organizers of the technology panel decided quickly that the expertise needed to do their job resided principally with the private sector, so they adopted an organization and procedures substantially different from those of the other panels to make it easy for private-sector experts to participate. They organized in separate workgroups for each major type of ozone-depleting chemical, such as refrigerants, solvents, foams, and aerosols. Teams of experts evaluated the potential of specific technologies and operational changes that might reduce chemical use in specific applications. Participants came mostly from companies using the chemicals, but also from industry associations, governments, universities, and nongovernmental organizations (NGOs). Experts from companies producing CFCs were at first excluded, a contentious decision that reflected negotiators’ mistrust of these firms for their long history of obstruction. This decision was reversed in 1990, after the first assessment was completed.

The Technology Panel, which after 1990 became the Technology and Economics Assessment Panel (TEAP), was strikingly successful. In four full assessments and many smaller tasks, it presented a huge number of specific technical judgments that were, with few exceptions, persuasive, technically supported, and consensual. It frequently reported that reductions could be made further and faster than previously believed, judgments that usually proved to be accurate or even somewhat conservative. TEAP carefully avoided usurping the parties’ authority, but its specific, carefully delimited statements of feasible reductions repeatedly exercised strong influence over the rate and direction of the parties’ decisions. Even when the parties did not precisely follow TEAP’s judgments of maximum feasible reductions, policy actors only rarely disputed or criticized TEAP’s conclusions, and they accused the panel of being too timid about as often as too bold. One measure of the parties’ approval was their repeated requests for TEAP to take on additional jobs.

Motivating private-sector participation is one basic challenge of technology assessment. Keeping the process credible is the other.

Just as the normal failures of technology assessment reflect inadequate private-sector expertise, TEAP’s effectiveness reflected its success at eliciting the serious, honest, and energetic participation of first-rank industry experts in the service of the regime’s environmental goals. Many factors helped attract these participants, including managerial initiatives to keep the process efficient and goal-directed. But the fundamental reason why private-sector experts came, and their employers agreed to send them, was that the process provided private benefits to participating companies and individuals.

These private benefits were of several types. The first was help in meeting the companies’ urgent need to reduce ozone-depleting chemicals to comply with current and anticipated regulatory targets. It was crucial for TEAP’s success that it started work shortly after the 1987 protocol had adopted 50 percent cuts in CFC use. These cuts posed a serious threat to users in those countries, including the United States, that no longer used CFCs in aerosol sprays, the one large use that was easy to replace. Widespread calls to further tighten targets sharpened this threat, making users want to reduce dependence on all ozone-depleting chemicals as rapidly as possible. TEAP’s working groups assembled critical masses of experts, with antitrust protection, to evaluate reduction options in each specific usage area, a problem-solving capacity greater than even the largest firms could deploy by themselves.

This help in managing the business risk of regulations was the most important private benefit to participants, particularly for the firms most dependent on CFCs and particularly in the early years of the regime. But it was not the only benefit. Participants also gained current detailed information about the transition from ozone-depleting chemicals, about which chemicals and uses posed greater and lesser challenges, and about the contributions of various types of technology and expertise. This information had substantial commercial value. It helped participants project market trends and identify new opportunities to sell products and services related to the transition. Individual participants also benefited from the professional challenge and prestige of the process. This was a self-reinforcing benefit, since the success and reputation of the process depended in turn on the stature of the experts participating. Because the first-rank experts on many topics worked for competing firms, TEAP gave them an opportunity to work intensively with and gain the respect of an elite group of peers that their normal professional lives did not offer.

The work of TEAP and its sectoral sub-bodies provided these private benefits while fulfilling a mandate to provide high-quality technical advice to the parties about feasible reductions. Moreover, although providing this advice was TEAP’s official job, the same activities provided other benefits to the regime as well. The processes of solving problems, refining known options, and evaluating new ones that occurred in and around TEAP’s bodies repeatedly identified opportunities to reduce chemical use beyond existing regulatory targets. Moreover, participants’ growing enthusiasm about the success of the process and their stature in their industries made them willing and able to act as missionaries, instructing their peers about reduction options and exhorting them to join the effort. These processes helped advance the margins of what reductions were feasible, and of what reductions were actually achieved–contributions of a fundamentally different character from TEAP’s official job of advising the parties. These contributions reflect a basic distinction between assessments of technological options and of scientific knowledge: Technology assessments have much greater capability to alter the reality they are assessing. Indeed, their effectiveness in doing so should be one major criterion of their success.

Motivating private-sector participation is one basic challenge of technology assessment. Keeping the process credible is the other. Any attempt to harness private interests for a public purpose runs the risk that private interests will distort or impair the pursuit of public ends to serve their own. TEAP avoided capture by status-quo interests, which so often causes technology assessments to deadlock. But it still had to manage the subtler risk of biased judgments favoring particular technologies, firms, or industries. Professional norms, explicit ground rules, and the personal integrity of participants provided some protection against this, but stronger controls were also needed. TEAP managed this risk through the mandates, membership, and operations of its working groups. Each group’s participants were chosen not only for their overlapping expertise but also for their divergent material interests. Some participants had interests in particular technical options but were balanced by advocates of other approaches. Moreover, although all producers of alternatives shared a general interest in a rapid transition from CFCs, their interests were balanced by those of the user firms that would bear the cost if the transition was too fast. Participants with high levels of closely overlapping expertise subjected all technical claims to vigorous questioning and criticism, thereby disciplining and restraining any attempts to advance claims that were weakly supported, exaggerated, or biased.

In sum, the success of technology assessment in the ozone regime depended on three conditions. First, the problems to be solved were difficult enough and focused enough that technical workgroups assembled from multiple organizations provided a crucial boost to the capacity to solve them. Second, it was possible to assemble workgroups with enough overlapping technical expertise to provide this incremental capability, but with material interests divergent enough that their discussions would reveal and restrain partisan claims. Finally, participants had private interests that could be advanced through the process, interests that were strong enough to motivate them to participate but not so strong and competitive that they were preoccupied with maneuvering for individual advantage. The principal private interest was a need for help in meeting present and anticipated regulatory controls. Another was commercial opportunities that would emerge from the assessment process or the transition it was supporting. The relative success of TEAP’s many activities shows these private interests to be crucial, in that participants had to be willing to share their knowledge and expertise not only with government officials, but also with each other. When private interests in the success of the process were not strong enough because participants believed they could block further controls, the assessment failed to attract serious participation and produced reports that were technically weak and more likely to be challenged. When individual, rival private interests were too strong (for example, when participants thought the panels’ judgments were likely to confer large gains or losses on particular firms) those interests obstructed participants’ willingness to share information and ideas openly.

Climate change

In contrast to the ozone issue, assessment of technological options to mitigate climate change has thus far been ineffective. This job falls within the mandate of the IPCC’s Working Group 3, which has used the same organization and procedures as the rest of the IPCC. Comprehensive assessments of mitigation are conducted by large chapter teams of independent scientists, drawn principally from universities, research institutions, governments, and NGOs. Chapter groups are organized around broad issues in mitigation, not specific problems of reducing emissions in particular industries or uses. Collectively authored chapters and their summaries go through lengthy rounds of scientific and government review, with all comments and authors’ responses documented, while separate summaries for policymakers are negotiated line-by-line by government representatives. This process pursues the legitimate aims of rigorous peer review, transparency, and democratic accountability, but it is unwieldy and time-consuming and gives control over the most prominent assessment output to an intergovernmental body. Unsurprisingly, participation by private-sector experts has been minimal. IPCC assessments of atmospheric science have been prominent and high in quality, but its assessments of mitigation options have been broad, diffuse, and technically uneven. They have provided neither useful guidance to policymakers nor practical contributions to emission reductions such as those TEAP provided for ozone.

This is an important missed opportunity. A more effective process could advance policy debate on climate change and directly reduce greenhouse gas emissions. There are large structural differences between the climate and ozone issues, of course. The scale, diversity, and importance of the human activities causing environmental burden are much greater for climate. That makes the wholesale application of ozone policies (in particular, the radically simplifying approach of cutting the offending activities to zero) deeply suspect, if not impossible. But these differences need not preclude the application of the model of technology assessment developed for ozone, so long as the corresponding conditions for success are present.

These conditions can be present for the greenhouse gas problem if the problem is carved into manageable pieces. No body has the expertise to assess the entire greenhouse gas mitigation problem with specificity, detail, and authority. But such assessments can be done for many separate subcomponents of the problem. In fact, although the diversity of emitting activities obstructs both comprehensive assessment of mitigation and development of comprehensive policies to control emissions, the same diversity represents an advantage in separating and assessing manageable subproblems. The activities or technologies whose conditions are most favorable for this model of assessment–the low-hanging fruit–can be pursued first.

The conditions that identify promising pieces of the mitigation problem correspond to those that facilitated TEAP’s success. First, the technological questions addressed must be such that individual organizations find them too hard or not sufficiently rewarding to solve by themselves, so that multiorganizational technical teams are necessary. This will likely be the case for problems that require inputs from several complementary areas of expertise unlikely to be found within one company but focused enough so that relevant domains of expertise can be identified with reasonable confidence. Second, it must be possible to limit the risk of capture by one point of view through appropriate assembly and management of workgroups. To the extent that participants’ interests in the group’s outputs diverge from the public interest, they must also diverge from each other. Participants thus would be motivated to police each other’s claims, yet have enough overlapping expertise to do this policing effectively. There is tension between these two conditions, which must be balanced appropriately for each problem and workgroup. Increasing the overlap of participants’ expertise can increase the group’s ability to restrain partisan claims, but expanding expertise in a broader set of relevant technologies can increase the workgroup’s capability. An additional condition is that participating companies must not perceive strong competitive advantages (for example, the fate of specific proprietary technologies) turning on the assessment’s outputs, lest they withhold or selectively reveal information for individual advantage.

The most important requirement is that participants have strong enough private interest in the group’s success. A firm deciding whether to join an assessment must consider not only the direct consequences of the technical deliberations but also the consequences of regulation or other policy likely to follow from the assessment. The simplest case in which firms might perceive enough benefit to participate would be when they judge the assessment likely to advance the development of true “no regrets” reduction options: those that are advantageous to adopt even when the implicit price of emissions is zero. Firms might benefit from adopting such options through cost reductions, improved yields, or improved products. The amount of reduction available from such options is controversial; engineering cost analyses consistently show that many such existing opportunities are not pursued, presumably because of unmeasured costs or other obstacles. But even if the pool of such options across the economy is modest, sectors or technologies where they appeared particularly abundant would be promising areas to concentrate the initial work of technology assessment bodies.

A second class of opportunities would arise from reduction options with modest costs in relatively cartelized industries: those with substantial concentration of production, barriers to entry, and inelastic output markets. In such industries, the largest barrier to adopting costly environmental technology is that the first firm doing so risks a large penalty from losing business to the others. But all could move together with no competitive effects and small overall cost. In these situations, the assessment process would serve not just to identify and develop options to reduce emissions but also to coordinate their adoption by competing companies so that none is uniquely penalized.
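
The arithmetic behind this coordination problem can be made concrete with a stylized example. The short Python sketch below compares one firm’s profit under unilateral and joint adoption of a costly abatement technology; all of the figures (base profit, abatement cost, and the sales lost to a non-adopting rival) are hypothetical and chosen purely for illustration.

```python
# Stylized two-firm illustration of the coordination problem described above.
# All numbers are hypothetical.

BASE_PROFIT = 100      # a firm's profit if nothing changes
ABATEMENT_COST = 2     # modest per-firm cost of adopting the cleaner technology
SWITCHING_LOSS = 10    # sales lost to a rival that does not adopt

def profit(adopts: bool, rival_adopts: bool) -> int:
    """One firm's profit for a given pair of adoption decisions."""
    p = BASE_PROFIT
    if adopts:
        p -= ABATEMENT_COST
        if not rival_adopts:
            p -= SWITCHING_LOSS   # unilateral adopter is undercut by the rival
    elif rival_adopts:
        p += SWITCHING_LOSS       # non-adopter picks up the adopter's lost sales
    return p

print("neither adopts:", profit(False, False))  # 100
print("I adopt alone: ", profit(True, False))   # 88  (a 12-point penalty)
print("both adopt:    ", profit(True, True))    # 98  (only the 2-point cost)
```

Under these assumptions no firm wants to move first, yet coordinated adoption costs each firm very little; an assessment process that lets competitors move together removes exactly this barrier.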

The strongest motivation for companies and industries to participate in such technical assessments, however, comes from existing and anticipated regulatory restrictions. Many firms participating in TEAP received and recognized other benefits as the assessment process continued. But it was the need for help in meeting impending regulatory restrictions that got them in the door initially, and this motivation remained important through 10 years of assessments. Any significant regulatory controls on greenhouse gases would immediately create similar incentives for industry to pursue mitigation options and participate in collaborative assessment to help identify them. But the greenhouse-gas policies in place at present are few and weak. The effective cost of emitting remains zero in most of the industrialized world, and will remain zero in the United States under current policy. (Speculative emission trades are now taking place at prices above zero, but these reflect bets on future policies, not the effect of present ones.) Those countries that implement the Kyoto targets might face a substantial emissions price, depending on how liberally they grant credit for buying fictitious cuts from Russia or Eastern Europe. Even without U.S. participation, the financial and technological resources of firms facing high emission prices in these countries may be sufficient to initiate a positive feedback between emission-reducing innovations and regime tightening. This suggests that an ozone-style technology assessment process may bring significant benefits even with the United States outside the regime. If U.S.-based multinationals with operations in those countries also choose to participate, the feedback could spread to the United States. That would weaken political opposition to emission cuts and create a powerful economic constituency favoring them, even while U.S. policy continues to lag the rest of the world and to maintain an emission price of zero.

Although policies putting a price on emissions must pass some minimal threshold to attract managers’ attention, their stringency can be modest initially. Even a price of a few dollars per ton of carbon will bear heavily on some businesses and will make some further emission-reduction technologies cost-effective, or worth pursuing in the expectation that they soon will be. Small initial steps can set in motion a positive feedback like the one that operated in the ozone regime, particularly if there are spillovers from the sectors or technologies most affected by early policies to other technologies or sectors.
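
As a rough sketch of that threshold effect, the Python fragment below compares a hypothetical facility’s carbon liability with two hypothetical abatement options; every number in it is an assumption made for illustration, not a figure from any study.

```python
# Back-of-the-envelope check of which abatement options pay off at a modest
# carbon price. All values are assumed for illustration.

CARBON_PRICE = 5.0           # dollars per ton of carbon (assumed)
ANNUAL_EMISSIONS = 500_000   # tons of carbon per year (assumed facility)

# (option name, cost per ton of carbon avoided, tons avoided per year) -- hypothetical
options = [
    ("process heat recovery", 3.0, 40_000),
    ("fuel switching",        8.0, 120_000),
]

print(f"annual liability at ${CARBON_PRICE:.0f}/ton: ${CARBON_PRICE * ANNUAL_EMISSIONS:,.0f}")
for name, cost_per_ton, tons in options:
    net = (CARBON_PRICE - cost_per_ton) * tons   # positive means worth doing now
    verdict = "cost-effective now" if net > 0 else "not yet cost-effective"
    print(f"{name:22s} net annual saving ${net:,.0f} ({verdict})")
```

Raising the assumed price moves more options above the line, which is the feedback described above.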

Moreover, sophisticated firms respond not just to regulations and policies in place but also to their expectations of future ones. Regulations already enacted create the strongest interest in pursuing emission-reduction options, but similar interests arise from developing the capacity to meet anticipated controls or forestalling threatened ones, if the threat is sufficiently salient and credible. Strong public and political concern may suffice to create a perceived medium-term risk of emission restrictions. This risk may be particularly salient for businesses that are most vulnerable to the threat of regulations: those with large, highly concentrated emissions sources or those that expect to have disproportionate burdens from abatement. Even firms that do not perceive the risk strongly enough to invest in developing alternatives themselves may be willing to participate in collaborative processes to do so in order to gather information, develop expertise, and identify specific risks and opportunities from potential regulations.

Identifying specific pieces of the greenhouse-gas problem that appear most promising would likely require separate preliminary consultation and assessment, updated periodically in response to changing technological, economic, and policy conditions. Even before such a systematic search, however, plausible candidates for near-term attention can be identified. These might include, for example:

  • Process efficiency improvements in major energy-consuming industrial sectors such as steel, smelting, chemicals, and pulp and paper
  • Fuel efficiency of vehicles, particularly automobiles and light trucks
  • Energy efficiency of major household appliances
  • Separation and sequestration of carbon from fossil fuels, either at the point of combustion or upstream
  • Industrial emissions of gases with high global warming potential such as perfluorocarbons, hydrofluorocarbons, and sulfur hexafluoride

Note that this list includes various consumer products for which it has long been suggested that purchasers do not adequately value efficiency. But it excludes energy-converting capital equipment for which efficiency advances can confer decisive competitive advantages, such as gas turbines, industrial furnaces and boilers, or photovoltaic cells.

For these assessments to succeed, their institutional setting must also meet certain conditions. To attract the participants and achieve the working conditions needed for success, workgroups will require substantial independence from external oversight so that they can maintain efficient, flexible, and confidential proceedings and retain full control over their outputs. But achieving salience and credibility in policy arenas will require some official standing with governmental or intergovernmental bodies. Government sponsorship and participation will probably be required to provide antitrust approval and institutional continuity between specific task groups and to help ease bureaucratic or policy obstacles to attractive mitigation options.

The IPCC is the single official source of authoritative scientific and technical information on climate change, but its design and procedures make it incapable of conducting assessments of the type proposed. Modifying or suspending the IPCC’s principles and procedures to let such assessments operate within it is an unlikely prospect, so these assessments will most likely have to operate outside it. Various specific institutional arrangements could be considered. Separate ad hoc assessment bodies for particular problems could operate as consultants to an intergovernmental body, either to the IPCC (as with one group contributing to the 2001 assessment) or to some body under the Climate Convention. Such an advisory relationship would provide the official status helpful in gaining policy attention and administrative continuity, while the group’s independence can be protected by publishing its reports directly in addition to providing them to the sponsoring body. Alternatively, assessment bodies could be established as independent NGOs, which could seek joint sponsorship of each assessment by multiple governmental and intergovernmental organizations and make their reports and briefings available to officials and negotiators. A higher-level process will be needed to identify tasks ripe for assessment and provide institutional memory. Unlike the assessments themselves, this task could fall to an IPCC body or to informal consultations involving IPCC and Climate Convention officials, industry representatives, and independent experts.

Whatever institutional setting is chosen, a technology assessment process similar to that used for ozone-depleting chemicals holds the most promise of harnessing the creativity and energy of private industry toward substantial reductions of greenhouse-gas emissions. Such assessments can create a mutually reinforcing feedback with sensible mitigation policies. Any mitigation policy will promote effective technical assessment of mitigation, while successful assessments will clarify and facilitate sensible mitigation policy. Even if the initial steps are small–assessments for a few targeted sectors or technologies that represent low-hanging fruit, and modest (but real) mitigation policies in several Organization for Economic Cooperation and Development countries–setting these interactions in motion may be the most effective step that can be taken now to chip away at the present policy deadlock.

What Americans Know (or Think They Know) About Technology

The United States is a world leader in developing and using new technology, and this is widely recognized as being largely responsible for the country’s economic success. One would expect Americans to be very knowledgeable about technology, and one would expect good data on the nation’s technological literacy. In reality, very little is known about U.S. technological literacy. In an effort to help fill this data gap, the International Technology Education Association (ITEA), which represents the interests of technology teachers, commissioned a Gallup poll on technological literacy and released the results in January 2002. Complete results are available at www.iteawww.org.

The poll’s 17 questions can be divided into three groups: those that tested conceptual understanding of technology, those that tried to gauge practical knowledge of specific technologies, and those that assessed opinion about the importance of the study of technology. This analysis focuses on the first two sets of questions.

The results of this poll are barely a beginning to the complex task of understanding the public’s technological literacy. But the evidence that most Americans have an extremely narrow view of what constitutes technology indicates that technological literacy falls far short of what the country needs. If nothing else, the ITEA/Gallup poll points to the need for more rigorous assessment of what Americans know about technology.

What’s technology?

Most revealing, and discouraging to those pushing for greater technological literacy, was the finding that a majority of U.S. citizens hold a very limited view of technology. Sixty-seven percent of those surveyed answered “computers” when asked to name the first thing that came to mind when they heard the word technology. A significantly greater proportion (78 percent versus 57 percent) of younger (age 18 to 29) than older (age 50+) Americans displayed this narrow conception. The next-most-often-cited response, “electronics,” was named by only 4 percent of those polled.

TABLE 1.
When you hear the word “technology,” what first comes to mind?

List of Mentions       Total Group %   18-29 Year Olds %   Age 50 and Older %
Computers                   67               78                  57
Electronics                  4                4                   4
Education                    2                3                   4
New Inventions               2                2                   2
Internet                     1                2                   2
Science                      1                2
Space                        1                1
Job/work                     1                2

Note: Numerous other responses were received; however, no others were mentioned by more than 1 percent.

Isn’t it more than computers?

Even when offered the broader definition of technology as “changing the natural world to meet our needs,” nearly twice as many respondents chose the narrow description, “computers and the Internet.”

Isn’t it more than physical products?

Technology, of course, is not just electronics nor is it just physical products. It includes the processes used to create those products, notably engineering design, as well as the systems in which those products are used. How do Americans perceive design in the context of technology? Given the choice of defining design as either a “creative process for solving problems” or as “blueprints and drawings from which you construct something,” 59 percent chose the latter. Although the second definition is not wrong, the first reflects more fully the role of design in technology creation.

Do you know how it works?

Americans express confidence in their ability to explain the workings of certain everyday technologies. Ninety percent said they could explain how a flashlight works. Seventy percent indicated they could explain the workings of a home-heating system. Far fewer were confident in their understanding of how telephone calls travel from point to point (65 percent) and how energy is converted into electrical power (53 percent). In all questions of this type, men expressed more confidence than women. Sometimes the difference was dramatic: 86 percent of men compared with 55 percent of women said they understood how heating systems operate.

The oft-repeated jokes about the reluctance of men to ask for directions are reason enough to suspect that men and women differ in how they assess their own knowledge. Besides, it’s far from clear what is meant when someone claims to “know how a flashlight works.” A few factual questions included in the poll also suggested that these self-assessments overstate what people actually know about how technology functions. For example, 46 percent of respondents incorrectly believe that using a portable phone in the bathtub creates a danger of electrocution.

TABLE 4.
Let me ask you if you could explain each of the following to a friend; just answer “yes” or “no.” Could you explain?

Explanation Requested (“yes” responses)                % Total   % Men   % Women
How a flashlight works                                    90        96       83
How to use a credit card to get money out of an ATM       89        92       86
How a telephone call gets from point A to point B         65        76       54
How a home heating system works                           70        86       55
How energy is transferred into electrical power           53        72       36

Technological democracy

Encouragingly, a substantial majority of Americans, ranging from 78 to 88 percent, felt that they should have a say in decisions involving technology, such as the development of fuel-efficient cars and genetically modified foods, and the construction of roads in their community. If people want a say in these decisions about technology, they have an incentive to learn more about technology.

TABLE 5.
Tell me, how much input do you think you should have in decisions in each of the following areas — a great deal, some, not very much, or none at all?

Decisions                                       Great Deal + Some %   Great Deal %   Some %   Not Very Much %   None at All %   Don’t Know/Refused %
Designation of neighborhood community centers            87               43           47            6                3                  1
Where to locate roads in your community                  88               44           44            8                3                  1
Development of fuel-efficient cars                       81               37           44           10                8                  1
Development of genetically modified foods                78               41           37           10               11                  1

Making Sense of Government Information Restrictions

New moves by the Bush administration to curtail public access to certain types of government information on security grounds have set off alarms among scientists, public interest groups, and concerned citizens, who foresee a veil of indiscriminate secrecy descending around their work and obstructing their activities. Indeed, there has already been a remarkable diversity of new restrictions on access to information, leading to the removal of many thousands of pages from government Web sites and the withdrawal of thousands of government technical reports from public access. In one case, government depository libraries around the country were ordered to destroy their copies of a U.S. Geological Survey CD-ROM on U.S. water resources. A close examination of the administration’s emerging information policies reveals a number of defects in their conception and execution but also suggests some options for moving beyond mere controversy toward a resolution of the competing interests at stake.

The new restrictions on public access to government information have been undertaken in a largely ad hoc and sometimes knee-jerk fashion. Although the need to respond quickly to an uncertain security environment by imposing temporary controls on an amorphous body of materials is understandable, this is not a satisfactory approach in the long term. Among other things, it is inconsistent with the body of law and policy that governs information disclosure and lacks the associated safeguards against abuse.

The Freedom of Information Act (FOIA) is the law that gives the public the legal right of access to government information. At the same time, however, it also provides legal authorization for the government to withhold information that fits within one or more of its nine exemptions, including classified national security information, proprietary information, and privacy information.

Several of the new restrictions on information are not congruent with the existing legal framework defined by FOIA or with the executive order that governs national security classification and declassification. For example, the administration makes a distinction between hard copy documents (deemed less sensitive) and Web-based documents (deemed more sensitive) that is not recognized in law. Likewise, some agencies are attempting to impose controls on documents that have been declassified under proper authority and publicly released, which is not permitted under current guidelines, and which is probably futile.

Perhaps the clearest case of bad policy is to be found in a March 19, 2002, White House memorandum to executive branch agencies, urging them to withhold “sensitive but unclassified information related to America’s homeland security.” This is bad policy because no one knows what it means. The meaning of “unclassified” is clear, of course, but the crucial term “sensitive” is not defined. This is a problem, because agencies may have many reasons for considering information sensitive that have nothing to do with national security. They may, for example, wish to evade congressional oversight, to shield a controversial program from public awareness, or to manipulate the political system through strategic withholding and disclosure of information. The failure to provide a clear definition of “sensitive but unclassified information” points to the need for greater clarity in government information policy that encompasses legitimate security concerns while upholding the virtues of public disclosure.

Start making sense

Crafting a new policy that responds to sometimes competing interests in security and public access should not be an extraordinarily difficult task. In the first place, most government information will be self-evidently subject to disclosure under FOIA or else clearly exempt from disclosure under the provisions of that law. These are easy cases where the proper legal course of action is obvious. But there will be certain types of information that form an ambiguous middle ground, to which the law has not yet caught up. This may be information that was formerly available on Web sites but has now been removed, or records that were officially declassified and released but have now been withdrawn. It is everything that might conceivably be considered “sensitive but unclassified.”

In deciding how to treat such information, the administration should enunciate a clear set of guiding principles, as well as an equitable procedure for implementing them and allowing for appeal of adverse decisions. The guiding principles could be formulated as a set of questions, such as these:

Is the information otherwise available in the public domain? Or can it be readily deduced from first principles? If the answer is yes, then there is no valid reason to withhold it, and doing so would undercut the credibility of official information policy.

Is there specific reason to believe the information could be used by terrorists? Are there countervailing considerations that would militate in favor of disclosure; that is, could it be used for beneficial purposes? Documents that describe in detail how anthrax spores could be milled and coated so as to maximize their dissemination presumptively pose a threat to national security and should be withdrawn from the public domain. But not every document that has the word “anthrax” in the title is sensitive. And even documents that are in some ways sensitive might nevertheless serve to inform medical research and emergency planning and might therefore be properly disclosed.

Is there specific reason to believe that the information should be public knowledge? It is in the nature of our political system that it functions in response to public concern and controversy. Environmental hazards, defective products, and risky corporate practices tend to find their solution, if at all, after a thorough public airing. Withholding controversial information from the public means short-circuiting the political process and risking a net loss in security.
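
Taken together, these questions amount to a simple screening procedure. The sketch below, in Python, is one hypothetical way to express that logic; the field names, the ordering of the tests, and the outcome labels are assumptions for illustration, not an official rule.

```python
# Hypothetical screening logic combining the three questions above.
# Field names and outcomes are illustrative, not an official procedure.

from dataclasses import dataclass

@dataclass
class Document:
    publicly_available: bool    # already in the public domain or easily deduced?
    specific_terror_use: bool   # specific reason to believe terrorists could use it?
    beneficial_uses: bool       # countervailing value for research, planning, etc.?
    public_interest: bool       # specific reason it should be public knowledge?

def screen(doc: Document) -> str:
    if doc.publicly_available:
        return "disclose (withholding would be futile)"
    if doc.specific_terror_use and not (doc.beneficial_uses or doc.public_interest):
        return "withhold, subject to appeal"
    if doc.specific_terror_use:
        return "weigh disclosure against risk; decision appealable"
    return "disclose"

# Example: a document that is in some ways sensitive but medically useful
print(screen(Document(False, True, True, False)))  # weigh disclosure against risk
```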

Of course, no set of principles will produce an unequivocal result in all cases. There will often be a subjective element to any decision to release or withhold contested information. Someone is always going to be dissatisfied. In order to forestall or correct abuses or mistaken judgments, an appeals process should be established to review disputed decisions to withhold information from the public. Placing such a decision before an appeals panel that is outside the originating agency and that therefore does not have the same bureaucratic interests at stake would significantly enhance the credibility of the deliberative process. The efficacy of such an appeals process has been repeatedly demonstrated by an executive branch body called the Interagency Security Classification Appeals Panel (ISCAP). This panel, which hears appeals of public declassification requests that have been denied by government agencies, has ruled against its own member agencies in an astonishing 80 percent of the cases it has considered.

A good-faith effort to increase the clarity, precision, and transparency of the Bush administration’s information policies, along with provisions for the public to challenge a negative result, would go a long way toward rectifying the current policy morass.

Summer 2002 Update

Progress on brownfields restoration

Since “Restoring Contaminated Industrial Sites” appeared in the Spring 1994 Issues (as well as an update in the Fall 1997 edition), Congress has debated how best to clean and redevelop moderately contaminated properties, known as brownfields. A major advance came in January 2002, when President Bush signed the Brownfields Revitalization and Environmental Restoration Act. Still, more is needed to overcome the financial barriers to brownfield reuse.

The new law opens up the Environmental Protection Agency’s (EPA’s) brownfield program in two significant ways. First, it permits properties with petroleum contamination to take advantage of grant resources, thereby addressing the realities of the reuse process, where mixed contaminants are the norm. Also, grant recipients now will be able to use a portion of the site-assessment or cleanup grants to pay insurance premiums that provide coverage (such as for cleanup cost over-runs) for these sites. This flexibility should help prospective site reusers secure private financing more readily, because it will provide a way to better quantify and manage risk.

From a procedural perspective, the Brownfields Revitalization Act sets the stage for new public-private redevelopment partnerships by clarifying vexing liability issues that deterred site acquisition and redevelopment. Specifically, it exempts from Superfund liability contiguous property owners: those who did not contribute to the contamination and who provide cooperation and access for the cleanup. It also clarifies the innocent landowner defense to Superfund liability, making it easier to determine its applicability in specific situations.

One of the law’s most important provisions exempts prospective purchasers from Superfund liability. Those who did not know about the contamination at the time of acquisition, who are not responsible for it, and who do not impede its cleanup would not be liable. This liability protection, available for persons who acquire property after January 11, 2002, will remove a significant barrier to private-sector participation in brownfield projects and allow new owners to quantify their risk much more precisely.

The new law also clarifies the state-federal relationship regarding cleanup finality, making it easier for innovative remediation technologies and engineering controls to be used as part of a cleanup. Sites addressed through a state’s voluntary response program now are protected from EPA enforcement and cost-recovery actions. The only exceptions are at sites where contamination has migrated across state lines or onto federal property; if releases, or the threat of releases, present an imminent and substantial endangerment; if new information shows that a cleanup is no longer protective; or if a state requests federal intervention. States now will share $50 million annually to support these response programs. In return, states will need to maintain a “public record of sites” addressed through their voluntary response program, and update that record annually.

The new law also provides $200 million annually in grants through fiscal 2006 to carry out essential early-stage activities associated with brownfield cleanups, notably site assessment, remediation planning, and the cleanup itself. Still, more attention is needed on the financing side of brownfield reuse. The House Financial Services Committee, for instance, recently reported a bill (H.R. 2941) to decouple the Department of Housing and Urban Development’s brownfield economic development initiative (BEDI) from the Section 108 program, which would open up BEDI to independent applications from small cities for the first time. The Senate soon will mark up a bill (S. 1079) to formalize a brownfield role for the Economic Development Administration (EDA) and to authorize $60 million for EDA to carry out that mission. The House Ways and Means Committee, moreover, is set to consider a proposal making the brownfield tax-expensing incentive (targeted to cleanup, maintenance, and monitoring costs) permanent. All of these proposals, if enacted, would further strengthen brownfield revitalization efforts.

Charles Bartsch

Improving Technological Literacy

At the heart of the technological society that characterizes the United States lies an unacknowledged paradox. Although the nation increasingly depends on technology and is adopting new technologies at a breathtaking pace, its citizens are not equipped to make well-considered decisions or to think critically about technology. Adults and children alike have a poor understanding of the essential characteristics of technology, how it influences society, and how people can and do affect its development. Many people are not even fully aware of the technologies they use every day. In short, as a society we are not technologically literate.

Technology has become so user friendly that it is largely invisible. Many people use technology with minimal comprehension of how it works, the implications of its use, or even where it comes from. We drive high-tech cars but know little more than how to operate the steering wheel, gas pedal, and brakes. We fill shopping carts with highly processed foods but are largely ignorant of the composition of those products or how they are developed, produced, packaged, and delivered. We click on a mouse and transmit data over thousands of miles without understanding how this is possible or who might have access to the information. Thus, even as technology has become increasingly important in our lives, it has receded from our view.

To take full advantage of the benefits of technology, as well as to recognize, address, or even avoid some of its pitfalls, we must become better stewards of technological change. Unfortunately, society is ill prepared to meet this goal. And the mismatch is growing. Although our use of technology is increasing apace, there is no sign of a corresponding improvement in our ability to deal with issues relating to technology. Neither the nation’s educational system nor its policymaking apparatus has recognized the importance of technological literacy.

Because few people today have hands-on experience with technology, except as finished consumer goods, technological literacy depends largely on what they learn in the classroom, particularly in elementary and secondary school. However, relatively few educators are involved in setting standards and developing curricula to promote technological literacy. In general, technology is not treated seriously as a subject in any grade, kindergarten through 12th. An exception is the use of computers and the Internet, an area that has been strongly promoted by federal and state governments. But even here, efforts have focused on using these technologies to improve education rather than to teach students about technology. As a result, many K-12 educators identify technology almost exclusively with computers and related devices and so believe, erroneously, that their institutions already teach about technology.

Most policymakers at the federal and state levels also have paid little or no attention to technology education or technological literacy. Excluding legislation focused on the use of computers as educational tools, only a handful of bills introduced in Congress during the past 15 years refer to technology education or technological literacy. Virtually none of these bills have become law, except for measures related to vocational education. Moreover, there is no evidence to suggest that legislators or their staffs are any more technologically literate than the general public, despite the fact that Congress and state legislatures often find themselves grappling with policy issues that require an understanding of technology.

It is imperative that this paradox, this disconnect between technological reality and public understanding, be set right. Doing so will require the cooperation of schools of education, schools of engineering, K-12 teachers and teacher organizations, developers of curriculum and instructional materials, federal and state policymakers, industry and nonindustry supporters of educational reform, and science and technology centers and museums.

What is technology?

In the broadest sense, technology is the process by which humans modify nature to meet their needs and wants. However, most people think of technology only in terms of its tangible products: computers and software, aircraft, pesticides, water-treatment plants, birth-control pills, and microwave ovens, to name a few. But the knowledge and processes used to create and operate these products–engineering know-how, manufacturing expertise, various technical skills, and so on–are equally important. An especially critical area of knowledge is the engineering design process: starting with a set of criteria and constraints and working toward a solution–a device, say, or a process–that meets those conditions. Technology also includes the infrastructure necessary for the design, manufacture, operation, and repair of technological artifacts. This infrastructure includes corporate headquarters, manufacturing plants, maintenance facilities, and engineering schools, among many other elements.

Technology is a product of engineering and science. Science has two parts: a body of knowledge about the natural world and a process of inquiry that generates such knowledge. Engineering, too, consists of a body of knowledge (in this case, knowledge of the design and creation of human-made products) and a process for solving problems. Science and technology are tightly coupled. A scientific understanding of the natural world is the basis for much of technological development today. The design of computer chips, for instance, depends on a detailed understanding of the electrical properties of silicon and other materials. The design of a drug to fight a specific disease is made possible by knowledge of how proteins and other biological molecules are structured and interact.

Conversely, technology is the basis for a good part of scientific research. Indeed, it is often difficult, if not impossible, to separate the achievements of technology from those of science. When the Apollo 11 spacecraft put Neil Armstrong and Buzz Aldrin on the moon, many people called it a victory of science. Similarly, the development of new types of materials or the genetic engineering of crops to resist insects are usually attributed wholly to science. Although science is integral to such advances, they also are examples of technology–the application of unique skills, knowledge, and techniques, which is quite different from science.

Technology also is closely associated with innovation, the transformation of ideas into new and useful products or processes. Innovation requires not only creative people and organizations but also the availability of technology and science and engineering talent. Technology and innovation are synergistic. The development of gene-sequencing machines, for example, made the decoding of the human genome possible, and that knowledge is fueling a revolution in diagnostic, therapeutic, and other biomedical innovations.

Hallmarks of technological literacy

As with literacy in reading, mathematics, science, or history, the goal of technological literacy is to provide people with the tools to participate intelligently and thoughtfully in the world around them. The kinds of things a technologically literate person must know can vary from society to society and from era to era. In general, technological literacy encompasses three interdependent dimensions: knowledge, ways of thinking and acting, and capabilities. Although there is no archetype of a technologically literate person, such a person will possess a number of general characteristics. Among such traits, technologically literate people in today’s U.S. society should:

Recognize technology in its many forms, and understand that the line between science and technology is often blurred. This will quickly lead to the realization that technology permeates modern society, from little things that everyone takes for granted, such as pencils and paper, to major projects, such as rocket launches and the construction of dams.

Understand basic concepts and terms, such as systems, constraints, and tradeoffs that are important to technology. When engineers speak of a system, for instance, they mean components that work together to provide a desired function. Systems appear everywhere in technology, from the simple, such as the half-dozen components in a click-and-write ballpoint pen, to the complex, such as the millions of components, assembled in hundreds of subsystems, in a commercial jetliner. Systems also can be scattered geographically, such as the roads, bridges, tunnels, signage, fueling stations, automobiles, and equipment that comprise, support, use, and maintain the nation’s network of highways.

Know something about the nature and limitations of the engineering design process. The goal of technological design is to meet certain criteria within various constraints, such as time deadlines, financial limits, or the need to minimize damage to the environment. Technologically literate people recognize that there is no such thing as a perfect design and that all final designs involve tradeoffs. Even if a design meets its stated criteria, there is no guarantee that the resulting technology will actually achieve the desired outcome, because unexpected and often undesirable consequences sometimes occur alongside intended ones.

Recognize that technology influences changes in society and has done so throughout history. In fact, many historical ages are identified by their dominant technology: the Stone Age, Iron Age, Bronze Age, Industrial Age, and Information Age. Technology-derived changes have been particularly evident in the past century. Automobiles have created a more mobile, spread-out society; aircraft and advanced communications have led to a “smaller” world and, eventually, globalization; contraception has revolutionized sexual mores; and improved sanitation, agriculture, and medicine have extended life expectancy. Technologically literate people recognize the role of technology in these changes and accept the reality that the future will be different from the present largely because of technologies now coming into existence, from Internet-based activities to genetic engineering and cloning.

Recognize that society shapes technology as much as technology shapes society. There is nothing inevitable about the changes influenced by technology; they are the result of human decisions and not of impersonal historical forces. The key people in successful technological innovation are not only engineers and scientists but also designers and marketing specialists. To succeed, a new technology must meet the requirements of consumers, business people, bankers, judges, environmentalists, politicians, and government bureaucrats. An electric car that no one buys might just as well never have been developed, and a genetically engineered crop that is banned by the government is of little more use than the weeds in the fields. The values and culture of society sometimes affect technology in ways that are not immediately obvious, and technological development sometimes favors the values of certain groups more than others. It has been argued, for example, that such development traditionally has favored the values of males more than those of females and that this factor might explain why the initial designs of automobile airbags were not appropriate to the smaller stature of most women.

Understand that all technologies entail risk. Some risks are obvious and well documented, such as the tens of thousands of deaths each year in the United States from automobile crashes. Others are more insidious and difficult to predict, such as the growth of algae in rivers caused by the runoff of fertilizer from farms.

Appreciate that the development and use of technology involve tradeoffs and a balance of costs and benefits. For example, preservatives may extend the shelf life and improve the safety of our food but also cause allergic reactions in a small percentage of individuals. In some cases, not using a technology creates added risks. Thus, technologically literate people will ask pertinent questions, of themselves and others, regarding the benefits and risks of technologies.

Be able to apply basic quantitative reasoning skills to make informed judgments about technological risks and benefits. Especially important are mathematical skills related to probability, scale, and estimation. With such skills, for example, individuals can make reasonable judgments about whether it is riskier to travel from St. Louis to New York on a commercial airliner or by car, based on the known number of fatalities per mile traveled for each mode of transportation.
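
A minimal sketch of that comparison, in Python, appears below; the fatality rates are placeholders rather than actual statistics, and the point is the structure of the per-mile calculation, not the particular numbers.

```python
# Comparing expected fatalities for one trip by two modes of travel.
# The rates below are illustrative placeholders, not published statistics.

TRIP_MILES = 950  # roughly St. Louis to New York (assumed round figure)

# Assumed fatality rates, deaths per 100 million passenger-miles
rates = {
    "commercial airliner": 0.05,
    "passenger car": 0.80,
}

for mode, rate in rates.items():
    expected = rate * TRIP_MILES / 100_000_000
    print(f"{mode:20s} expected fatalities for the trip: {expected:.1e}")
```

The essential skill is holding the exposure measure constant (here, passenger-miles) so that the two risks are actually comparable; per-trip or per-hour rates can rank the modes differently.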

Possess a range of hands-on skills in using everyday technologies. At home and in the workplace, there are real benefits of knowing how to diagnose and even fix certain types of problems, such as resetting a tripped circuit breaker, replacing the battery in a smoke detector, or unjamming a food-disposal unit. These tasks are not particularly difficult, but they require some basic knowledge and, in some cases, familiarity with simple hand tools. The same can be said for knowing how to remove and change a flat tire or hook up a new computer or telephone. In addition, a level of comfort with personal computers and the software they use, and being able to surf the Internet, are essential to technological literacy.

Seek information about particular new technologies that may affect their lives. Equipped with a basic understanding of technology, technologically literate people will know how to extract the most important points from a newspaper story, television interview, or discussion; ask relevant questions; and make sense of the answers.

Participate responsibly in debates or discussions about technological matters. Technologically literate people will be prepared to take part in public forums, communicate with city council members or members of Congress, or in other ways make their opinions heard on issues involving technology. Literate citizens will be able to envision how technology (in conjunction with, for example, the law or the marketplace) might help solve a problem. Of course, technological literacy does not determine a person’s opinion. Even the best-informed citizens can and do hold quite different opinions depending on the question at hand and their own values and judgments.

A technologically literate person will not necessarily require extensive technical skills. Such literacy is more a capacity to understand the broader technological world than it is the ability to work with specific pieces of it. Some familiarity with at least a few technologies will be useful, however, as a concrete basis for thinking about technology. Someone who is knowledgeable about the history of technology and about basic technological principles but who has no hands-on capabilities with even the most common technologies cannot be as technologically literate as someone who has those capabilities.

But specialized technical skills do not guarantee technological literacy. Workers who know every operational detail of an air conditioner or who can troubleshoot a software glitch in a personal computer may not have a sense of the risks, benefits, and tradeoffs associated with technological developments generally and may be poorly prepared to make choices about other technologies that affect their lives. Even engineers, who have traditionally been considered experts in technology, may not have the training or experience necessary to think about the social, political, and ethical implications of their work and so may not be technologically literate. The broad perspective on technology implied by technological literacy would be as valuable to engineers and other technical specialists as to people with no direct involvement in the development or production of technology.

Laying the foundation

In order to improve technological literacy, the most natural and important place to begin is in schools, by providing all students with early and regular contact with technology. Exposing students to technological concepts and hands-on, design-related activities is the most likely way to help them acquire the kinds of knowledge, ways of thinking and acting, and capabilities consistent with technological literacy. However, only 14 states now require some form of technology education for K-12 students, and this instruction usually is affiliated with technician-preparation or school-to-work programs. In 2000, the Massachusetts Board of Education added a combined engineering/technology component to its K-12 curriculum, becoming the first state to explicitly include engineering content. Elsewhere, a few schools offer stand-alone courses at all grade levels, but most school districts pay little or no attention to technology. This is in stark contrast to the situation in some other countries, such as the Czech Republic, France, Italy, Japan, the Netherlands, Taiwan, and the United Kingdom, where technology education courses are required in middle school or high school.

One limiting factor is the small number of teachers trained to teach about technology. There are roughly 40,000 technology education teachers nationwide, mostly at the middle-school or high-school level. By comparison, there are some 1.7 million teachers in grades K-12 who are responsible for teaching science. Another factor is inadequate preparation of other teachers to teach about technology. Schools of education spend virtually no time developing technological literacy in students who will eventually stand in front of the classroom. The integration of technology content into other subject areas, such as science, mathematics, history, social studies, the arts, and language arts, could greatly boost technological literacy. Without teachers trained to carry out this integration, however, technology is likely to remain an afterthought in U.S. education.

Beyond grades K-12, there are additional opportunities for strengthening technological literacy. At two-year community colleges, many courses are intended to prepare students for technical careers. As they learn new skills, these students, with proper instruction, also can develop a better understanding of the underlying technology that could be used as the basis for teaching about the nature, history, and role of technology in our lives. Colleges and universities offer a variety of options for more advanced study of technology. There are about 100 science, technology, and society programs on U.S. campuses that offer both undergraduate and graduate courses; and a number of universities have programs in the history, philosophy, or sociology of technology. Many engineering schools require that students take at least one course in the social impacts of technology. For the adult population already out of school, informal education settings, such as museums and science centers, as well as television, radio, newspapers, magazines, and other media, offer avenues for learning about and becoming engaged in a variety of issues related to technology.

A number of specific steps can help strengthen the presence of technology in both formal and informal education. For example, federal and state agencies that help set education policy should encourage the integration of technology content into K-12 standards, curricula, instructional materials, and student assessments (such as end-of-grade tests) in nontechnology subject areas.

At the federal level, the National Science Foundation (NSF) and the Department of Education can do this in a number of ways, including making integration a requirement when providing funding for the development of curriculum and instructional materials. Technically oriented agencies, such as the National Aeronautics and Space Administration, the Department of Energy, and the National Institutes of Health, can support integration by developing accurate and interesting background materials for use by teachers of nontechnical subjects.

At the state level, science and technology advisers and advisory councils, of which there are a growing number, can use their influence with governors, state legislatures, and industry to encourage the inclusion of technology content not only in the general K-12 curriculum but also in school-to-work and technician-preparation programs. State boards of education can provide incentives for publishers to modify next-generation science, history, social studies, civics, and language arts textbooks to include technology content. Such incentives might come from incorporating technological themes into state educational standards or by modifying the criteria for acceptable textbooks.

States also should better align their K-12 standards, curriculum frameworks, and student assessments in the sciences, mathematics, history, social studies, civics, the arts, and language arts with national educational standards that stress the connections between these subjects and technology. Among such guidelines, the International Technology Education Association, a professional organization of technology educators, recently published Standards for Technological Literacy: Content for the Study of Technology, a comprehensive statement of what students must learn in order to be technologically literate.

Another crucial need is to improve teacher education. Indeed, the success of changes in curricula, instructional materials, and student assessments will depend largely on the ability of teachers to implement those changes. Lasting improvements will require both the creation of new teaching and assessment tools and the appropriate preparation of teachers to use those tools effectively. Teachers at all levels should be able to conduct design projects and use design-oriented teaching strategies to encourage learning about technology. This means that NSF, the Education Department, and professional organizations that accredit teachers should provide incentives for colleges and universities to transform the preparation of all teachers to better equip them to teach about technology throughout the curriculum. In preparing elementary school teachers, for example, universities should require courses or make other provisions to ensure that would-be teachers are, at the very least, scientifically and technologically literate. Science for All Americans, an educational guidebook produced by the American Association for the Advancement of Science, might well serve as a minimum standard of such literacy.

The research base related to technological literacy also must be strengthened. There is a lack of reliable information about what people know and believe about technology, as well as about the cognitive steps that people use in constructing new knowledge about technology. These gaps have made it difficult for curriculum developers to design teaching strategies and for policymakers to enact programs to foster technological literacy. Building this scientific base will require creating cadres of competent researchers, developing and periodically revising a research agenda, and allocating adequate funding for research projects. NSF should support the development of assessment tools that can be used to monitor the state of technological literacy among students and the public, and NSF and the Education Department should fund research on how people learn about technology. The findings must be incorporated into teaching materials and techniques and into formal and informal education settings.

It will be important, as well, to enhance the process by which people make decisions involving technology. One of the best ways for members of the public to become educated about technology is to engage in discussions of the pros and cons, the risks and benefits, and the knowns and unknowns of a particular technology or technological choice. Engagement in decisionmaking is likely to have a direct positive effect on the nonexpert participants, and involving the public in deliberations about technological developments as they are taking shape, rather than after the fact, may actually shorten the time and reduce the resources required to bring new technologies into service. Equally important, public participation may result in design changes that better reflect the needs and desires of society.

Industry, federal agencies responsible for carrying out infrastructure projects, and science and technology museums should provide more opportunities for the nontechnical public to become involved in discussions about technological developments. The technical community, especially engineers and scientists, is largely responsible for the amount and quality of communication and outreach to the public on technological issues. Industry should err on the side of encouraging greater public engagement, even if it may not always be clear what types of technological development merit public input. In the federal arena, some agencies already require recipients of funding to engage communities likely to be affected by planned infrastructure projects. These efforts should be expanded. The informal education sector, especially museums and science and technology centers, is well positioned to prepare members of the public to grapple with the complexities of decisionmaking in the technological realm. These institutions and the government agencies, companies, and foundations that support them could do much more to encourage public discussion and debate about the direction and nature of technological development at both the local and national level.

If informed decisionmaking is important for all citizens, then it is vital for leaders in government and industry whose decisions influence the health and welfare of the nation. With both sectors facing a daunting array of issues with substantial technological components, there is a great unmet need for accurate and timely technical information and education. Thus, federal and state agencies with a role in guiding or supporting the nation’s scientific and technological enterprise, along with private foundations concerned about good governance, should support education programs intended to increase the technological literacy of government officials (including key staff members) and industry leaders. Executive education programs could be offered in many locations, including major research universities, community colleges, law schools, business schools, schools of management, and colleges of engineering. The engineering community, which is directly involved in the creation of technology, is ideally suited to promote such programs. An engineering-led effort to increase technological literacy could have significant long-term payoffs, not only for decisionmakers but also for the public at large.

These steps are only a starting point. Numerous other actions, both large and small, also will be needed across society. The case for technological literacy must be made consistently and on an ongoing basis. As citizens gradually become more sophisticated about technological issues, they will be more willing to support measures in the schools and in the informal education arena to raise the technological literacy level of the next generation. In time, leaders in government, academia, and business will recognize the importance of technological literacy to their own well-being and the welfare of the nation. Achieving this goal promises to be a slow and challenging journey, but one that is unquestionably worth embarking on.

Environmental Policy for Developing Countries

Most developing countries have long since established laws and formal governmental structures to address their serious environmental problems, but few have been successful in alleviating those problems. The development banks, which control resources desperately needed by the developing countries, are promoting the use of economic incentives and other market-based strategies as the key to more effective environmental protection. However, the donors have rarely asked whether the approaches they are urging, which have recently had some success in Europe and the United States, can be implemented effectively in developing countries with limited resources and little experience with market-based policies of any kind.

We worry that these highly sophisticated instruments have been pushed too hard and too fast, and that those who promote them say little about the context and conditions in which they thrive. The targets of this advice should be better informed about everything they would need to do to make market-based instruments work. Otherwise, the cause of environmental protection itself may be dealt a blow when ill-conceived policies divert a country’s energies without producing the desired result. Developing-world regulators, already marginalized in their own countries, will have little to show for their efforts in terms of a cleaner environment.

Before imposing a regulatory strategy on the developing world, we should review the experience of the industrialized countries and others that have implemented market-based policies. How extensive is the experience? How successful? What have we learned about the conditions necessary for effective market-based policies? Then we will be ready to consider when and where these policies are likely to work in the developing world.

History

Although incentive-based approaches to environmental control were being developed by economists in the early 1970s when many of the basic environmental laws were being written in the United States, none of the early laws used economic instruments. Market-based tools began to make inroads in the 1980s when regulators at the U.S. Environmental Protection Agency (EPA) saw that they could be useful in dealing with difficult Clean Air Act implementation problems. Each stack at each regulated facility had been given a discharge permit. The EPA innovation allowed firms to trade those permits internally and externally, so that expensive-to-control sources could emit more and cheap-to-control sources would be encouraged to cut back.
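
To make the logic concrete, here is a minimal sketch, using invented numbers rather than any actual EPA data, of why letting a cheap-to-control source take on the abatement of an expensive-to-control source lowers the total cost of meeting the same overall cap:

```python
# Hypothetical illustration of why permit trading lowers total abatement cost.
# Two sources must together cut 100 tons. Their (assumed, made-up) marginal
# abatement costs are constant: Source A abates at $200/ton, Source B at $50/ton.

REQUIRED_CUT = 100  # tons the two sources must remove in total

def total_cost(cut_a: float, cut_b: float, mc_a: float = 200, mc_b: float = 50) -> float:
    """Total abatement cost when A cuts cut_a tons and B cuts cut_b tons."""
    return cut_a * mc_a + cut_b * mc_b

# Uniform requirement: each source is told to cut 50 tons.
uniform = total_cost(50, 50)   # 50*200 + 50*50 = $12,500

# With trading, A buys B's reductions at any permit price between $50 and $200,
# so the cheap-to-control source B does all 100 tons of abatement.
traded = total_cost(0, 100)    # 100*50 = $5,000

print(f"Equal 50-ton cuts cost ${uniform:,.0f}; the same 100-ton cut with trading costs ${traded:,.0f}")
print(f"Savings from trading: ${uniform - traded:,.0f}")
```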

A variation of this system was eventually written into law to create the best-known and most successful U.S. market-based instrument. To control acid rain, Title IV of the 1990 Amendments to the Clean Air Act established tradable emission allowances for sulfur dioxide. Starting in January 1995, the electric power industry in the eastern third of the nation was allocated a fixed number of total “allowances.” The rules allowed firms to bank allowances for future use, buy allowances to meet regulatory requirements rather than reduce emissions, or sell excess allowances.

Rigorous checks and balances built into the program ensure compliance, system credibility, and integrity. Utilities participating in the program were required to have expensive equipment for continuous monitoring. Every allowance is assigned a serial number. EPA records transfers to make sure that a unit’s emissions do not exceed the number of allowances it holds and makes this information available to the public. The transparency in the system provides a level of reassurance to the public and competitors alike. Noncompliance is punished.
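
The bookkeeping behind these checks can be pictured with a small sketch. The following is not EPA's actual allowance tracking system, only a hypothetical illustration of the core idea: serial-numbered allowances, recorded transfers, and a compliance test that compares monitored emissions with holdings.

```python
# Minimal sketch of an allowance registry (hypothetical, not EPA's system):
# serial-numbered allowances, recorded transfers, and an end-of-year check that a
# unit's monitored emissions do not exceed the allowances it holds.

from dataclasses import dataclass, field

@dataclass
class Registry:
    holdings: dict = field(default_factory=dict)   # unit -> set of allowance serial numbers

    def issue(self, unit: str, serials: list[str]) -> None:
        self.holdings.setdefault(unit, set()).update(serials)

    def transfer(self, seller: str, buyer: str, serials: list[str]) -> None:
        # A transfer is recorded only if the seller actually holds every serial number.
        assert set(serials) <= self.holdings[seller], "seller does not hold these allowances"
        self.holdings[seller] -= set(serials)
        self.holdings.setdefault(buyer, set()).update(serials)

    def in_compliance(self, unit: str, monitored_tons: int) -> bool:
        # One allowance covers one ton; emissions must not exceed holdings.
        return monitored_tons <= len(self.holdings.get(unit, set()))

registry = Registry()
registry.issue("UnitA", [f"A-{i}" for i in range(1000)])
registry.issue("UnitB", [f"B-{i}" for i in range(1000)])
registry.transfer("UnitB", "UnitA", [f"B-{i}" for i in range(200)])  # A buys 200 allowances
print(registry.in_compliance("UnitA", monitored_tons=1150))  # True: 1,200 allowances held
print(registry.in_compliance("UnitB", monitored_tons=900))   # False: only 800 left
```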

Allowance trading has undoubtedly accelerated program implementation and saved money. Utility companies are pleased that they, rather than the government, decide the most cost-effective way to comply.

Although much is made of the success of this program, the reality is that most U.S. environmental programs continue to use traditional regulation because the alternatives pose significant technical and political challenges. Some emissions are too difficult or too expensive to monitor well enough to support trading programs. Equally important, any environmental program must be politically viable. Many in the public interest community oppose economic instruments because they fear that emissions trading cannot be adequately enforced. Some think that market-based approaches provide excuses for polluters to avoid responsibility.

A lesson from this brief history is that market-based instruments have been applied gradually and cautiously in the most mature environmental protection regime, the United States. They are limited in application and some are still essentially experimental. Their gradual introduction to resolve specific, practical problems has helped to address these concerns and build constituencies for further use.

Many European countries have also implemented economic instruments such as taxes on fertilizer, gasoline, and other polluting inputs. For example, Germany, France, and the Netherlands have effluent charge systems. Most of these innovations are aimed at raising revenue for infrastructure investment rather than encouraging pollution reduction. Charge levels, set too low to provide an incentive for discharge reduction, instead guarantee a fairly regular income stream.

Ironically, there are even examples from the countries of the communist bloc, most of which used fees and fines on emissions as basic tools of environmental protection. However, pollution charges were paid out of the soft budgets of state enterprises and therefore had little chance of influencing enterprise behavior. Ultimately, they have become pay-to-pollute schemes whose revenues support government environment agencies.

After the fall of communism, the multilateral development banks and the Western industrialized countries promoted market-based instruments to a Central European audience eager for alternatives to central planning. They seemed the right targets for this message, as they are in most respects “developed” industrialized economies rather than “developing” countries. Typically, they have excellent universities, high rates of literacy, a technically trained civil service, and an existing system of environmental regulation.

For the most part, economic instruments have not taken hold in the countries in transition. Demonstration emissions-trading and transferable-permit systems with a handful of managed trades were actively pursued in Kazakhstan, Poland, and the Czech Republic. An elaborate Slovak system is scheduled to begin in 2002. But these were only experiments, and they did not deliver on their promise of enabling these countries to avoid the mistakes committed in the name of environmental protection in the West.

With hindsight we can see that these countries simply lacked many of the prerequisites for an effective market-based approach. And we should keep in mind that in many ways these countries are stronger candidates for market strategies than are the developing countries. Among the missing ingredients for success were:

Bone-deep understanding of markets. The actors in complex market transactions must have considerable skills that did not exist in the countries of the former Soviet bloc. Before 1989, scholars studied non-Marxist economics, but industrial managers experienced an economy structured under the rules of state socialism, without profit and loss, Western accounting principles, or a stock market. A few countries had retained a trading mentality in small businesses, but major industry faced a steep learning curve before it could assume the responsibilities of pollution trading.

Reliable recourse when deals fail. Emissions trading is the purchase and sale of paper instruments that represent a right to pollute in the future. These are complex intangible property rights, subject to the normal hazards of commercial transactions. Sellers may default, and buyers may go into bankruptcy. False accounting is a peril that has led to criminal investigations in the United States. Someone must police trades and ensure their integrity. This arbiter can be the environmental authority, another administrative body, or the courts, but it must exist.

We must develop a better understanding of the conditions in each country that influence the performance of specific policy instruments.

Donor advice on emissions trading, however, did not distinguish between countries with working legal systems and those without. In the early days of the transition in particular, such institutions were in short supply. Some of the westernmost countries in transition were beginning to re-establish a European legal system free of political and economic “safety valves,” as Daniel Cole of Indiana University Law School has characterized the “legal means of last resort” by which party and state authorities avoided their own rules throughout the period of Soviet dominance. To the east, Russia and the other parts of the Soviet Union never really had rule-of-law traditions.

Ensuring integrity. Everyone, particularly the public, must believe that trading partners are honoring their commitments. In the United States, where environmental regulation is a very contentious subject, trust is developed through high levels of transparency. Permit requirements, emissions data, and the transactions themselves are all available for inspection by the public. The relative ease with which they can monitor specific transactions and know whether industry is meeting its commitments has helped to convince numerous stakeholders, including economic competitors, nongovernmental organizations, and the public interest community, to go along with unconventional programs.

When trades are made under public scrutiny, there are fewer reasons to be concerned that differential treatment of polluters will provide opportunities for corruption. But the experience of the past 45 years has left most citizens of the Soviet bloc countries acutely aware of how quickly discretion can be hijacked to serve the purposes of people in power, rather than the environment. Emissions trading programs might work without as much transparency as the United States demands. In Western Europe, the public is more tolerant when industry and government sit down to negotiate. But it is clearly an issue that architects of any trading program must consider, and it requires special consideration in countries struggling with endemic corruption.

Knowing the real cost of compliance. In the United States, industry is motivated to participate in emissions trading by the economic pain firms have experienced in investing in compliance. Industry’s capacity to sort this out was honed by a century of experience with cost accounting. One reason firms comply is that they have a clear expectation of consistent and reliable enforcement.

The hard realities of environmental compliance were basically unknown to industry in the Soviet bloc countries. Regulatory bodies were weak. The laws were characterized by scholars as “aspirational,” stating idealistic ambitions not connected with day-to-day reality. Environmental requirements were routinely excused in favor of meeting production goals.

When firms must grapple with authentic–rather than theoretical–environmental regulation, they develop a good grasp of the real costs to them of regulation and of what it takes to reach compliance. This motivates firms to look for cheaper ways to reach the required standards. We have seen no evidence that industries will conclude in the abstract that emissions trading will be a cheaper way of achieving compliance than directed regulation. Why try to save money on regulation if you are not expending any to begin with and don’t expect to in the future? Compliance practices are beginning to change in a few of the countries in transition, but even today, in most countries environmental enforcement is no more rigorous than it was during the Soviet period, and it is likely weaker because of the general confusion.

Genuine monitoring. Any system with the goal of regulating firms releasing pollution to the environment requires knowledge–not a guess–of what each plant is actually discharging. Trading complicates the situation by sanctioning changes in the amount of permitted discharge from each source. Regulators and the public must be assured that real pollution reductions are being traded.

Monitoring in the former Soviet bloc usually measured ambient air quality, not what pollutants plants released at the end of their discharge pipes. In truth, no one could be sure what particular factories were emitting and whether they were meeting legal requirements. For some purposes, there are alternatives to monitoring, such as emissions estimates using the sulfur content of coal. But the accuracy of estimates depends on a number of assumptions, including that the pollution control equipment has been turned on and has been maintained so that it is capable of performing the assumed level of removal. These are not always safe assumptions in countries with rampant corruption and systematic noncompliance.
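
The arithmetic of such an estimate, and its sensitivity to the assumptions, can be sketched as follows. The plant figures and removal efficiencies are hypothetical; only the rough two-to-one mass ratio of sulfur dioxide to sulfur is fixed by chemistry.

```python
# A rough, hedged sketch of inferring SO2 emissions from the sulfur content of
# coal instead of measuring the stack directly. Burning sulfur produces SO2 at
# roughly twice the sulfur mass (molar masses of about 64 vs. 32).

def estimated_so2_tons(coal_tons: float,
                       sulfur_fraction: float,
                       scrubber_removal: float,
                       scrubber_uptime: float) -> float:
    """Estimated SO2 (tons), assuming essentially all fuel sulfur converts to SO2."""
    uncontrolled = coal_tons * sulfur_fraction * 2.0
    effective_removal = scrubber_removal * scrubber_uptime
    return uncontrolled * (1.0 - effective_removal)

# The estimate is only as good as its assumptions: the same plant looks very
# different if the scrubber runs all year versus half the year.
print(estimated_so2_tons(1_000_000, 0.02, 0.90, 1.0))   # ~4,000 tons
print(estimated_so2_tons(1_000_000, 0.02, 0.90, 0.5))   # ~22,000 tons
```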

This brief review makes clear that the former Soviet bloc had considerable economic, cultural, and political baggage that was directly relevant to the introduction of any state-of-the-art environmental tools, including market-based instruments. Nevertheless, the donor and assistance literature we have reviewed was content to limit discussion of these critical issues to vague comments, such as that market-based instruments are effective “if implemented under the right conditions,” according to publications sponsored by the Regional Environmental Center for Central and Eastern Europe. The advice suggests a naiveté about the audience. Many of the promoted solutions would have required fairly fundamental commitments on the part of governments–not just environment ministries–that had no interest in the environment.

One could argue that no harm is done in encouraging countries to aim for the most sophisticated policies, but the reality is that resources and political will are limited. These countries cannot afford the luxury of failed experiments. An opportunity was lost to invest that time and money in less ambitious projects that might have produced a sense of accomplishment and some small environmental gains. The donors’ failure to admit the serious shortcomings in the recipient countries, or to point out the many issues that surrounded the limited introduction of market-based instruments in the United States, had a cost. Environmental professionals in the countries in transition should have been informed that they could not make this leap without constructing supporting institutions.

What about the developing world?

The key elements discussed above–accurate monitoring, transparency, a working legal system, and a realistic incentive to trade–are at least as scarce in the developing as in the transitioning world. Corruption, favoritism, and poor environmental enforcement are features of both landscapes. In addition, the developing world may present its own unique challenges. There are fewer trained people, and the best people tend to be concentrated in capitals rather than in field posts; equipment for monitoring and data gathering is scarce, and basic data are unreliable.

None of these factors seem to have discouraged advocates for economic instruments. Unfortunately, by arguing, as Harvard University economist Theodore Panayotou has, that market-based instruments “in effect transfer [important responsibilities] from bureaucrats to the market,” some of the literature has suggested that capacity limitations are a reason for, rather than an obstacle to, the adoption of market-based instruments.

Proponents of economic incentives for developing-country environmental management usually start with the appealing proposition that market-based instruments relax the trade-off between the goals of economic growth and environmental quality. They argue that in the short run, the instruments offer the cheapest solutions and ones that can be achieved without specific knowledge of the technology or pollution-reduction costs of polluting sources. At the same time, the instruments will produce revenue for chronically poor governments. Their final argument is that incentive-based approaches will spur technological advances that, in the long run, make it cheaper to reach better environmental quality. The claimed cost of not adopting market-based policy instruments rests on this entire chain of argument.

Cheapest now. The proponents assume that if each source faces the same price per unit of discharge, either as a charge or as the price of a tradable permit, the total cost of meeting some given air or water quality standard will be minimized. For many, although not all, situations, however, the location of each source matters to the environmental results, as many economists have noted. Where location does matter, it is necessary to tailor charges to each source’s location and take account of each source’s costs to achieve “cheapest-now” solutions: not an easy matter in the developing world. To do this with a tradable permit system would require a special kind of permit that grants the right to change the quality at particular points in the region rather than to discharge a particular quantity of pollutants at the source’s location. This is a complex business and has never been tried as a real policy.
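
A toy calculation illustrates the point. Assume two hypothetical sources with identical abatement costs but different contributions, per ton emitted, to air quality at a single monitoring point. A uniform charge and charges weighted by location can both meet the same ambient target, but at different total cost. All numbers are invented for illustration.

```python
# Hedged toy model of the location problem: two sources with identical quadratic
# abatement costs but different "transfer coefficients" linking their emissions to
# concentration at one receptor. All figures are invented.

BASELINE = 100.0                   # uncontrolled emissions of each source, tons
TRANSFER = {"A": 1.0, "B": 0.2}    # contribution of one ton emitted to receptor concentration
TARGET = 60.0                      # allowed concentration at the receptor

def cost(abatement: float) -> float:
    # Quadratic abatement cost, so marginal cost rises one dollar per ton abated.
    return abatement ** 2 / 2.0

def concentration(abate_a: float, abate_b: float) -> float:
    return (TRANSFER["A"] * (BASELINE - abate_a)
            + TRANSFER["B"] * (BASELINE - abate_b))

# Uniform charge: both sources abate until marginal cost equals the charge, so they
# abate the same amount. A charge of $50/ton just meets the ambient target here.
uniform_abatement = 50.0
assert abs(concentration(uniform_abatement, uniform_abatement) - TARGET) < 1e-9
uniform_cost = cost(uniform_abatement) * 2

# Location-weighted charges: each source faces a charge proportional to its transfer
# coefficient, so abatement concentrates where it improves air quality the most.
shadow_price = 60.0 / 1.04         # solves 120 - 1.04*lambda = 60
abate_a = shadow_price * TRANSFER["A"]
abate_b = shadow_price * TRANSFER["B"]
assert abs(concentration(abate_a, abate_b) - TARGET) < 1e-6
weighted_cost = cost(abate_a) + cost(abate_b)

print(f"Uniform charge total cost:      ${uniform_cost:,.0f}")
print(f"Location-weighted charge cost:  ${weighted_cost:,.0f}")  # cheaper for the same air quality
```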

No information needed. If it were true that a single-price system produced cheapest-now, the required price could conceivably be found by trial and error. But in a world in which location matters, a trial-and-error approach is not even conceivable if there are more than a couple of sources. To make matters worse, there will almost always be numerous individualized sets of charges that produce the desired environmental quality. Finding any one of these is not the same as finding the cheapest-now set. The responsible regulatory body would have to check a great many such successes, adding up the costs incurred by the sources, in order to know when it was closest to cheapest-now.
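
For the simple case the proponents have in mind, where location does not matter and each source abates until its marginal cost equals the charge, the trial-and-error search might be sketched as follows; the point above is that once location matters, no single number plays this role. All figures are hypothetical.

```python
# Minimal sketch of trial-and-error search for a single charge, assuming location
# is irrelevant and each source abates until its (assumed linear) marginal cost
# equals the charge. The regulator adjusts one charge until total emissions hit
# the target. All numbers are invented.

sources = [
    # (uncontrolled emissions in tons, marginal-cost slope in $/ton per ton abated)
    (10_000, 0.05),
    (8_000, 0.20),
    (15_000, 0.02),
]
TARGET = 18_000  # tons of total allowed emissions

def emissions_at_charge(charge: float) -> float:
    total = 0.0
    for uncontrolled, slope in sources:
        abatement = min(uncontrolled, charge / slope)   # abate while marginal cost < charge
        total += uncontrolled - abatement
    return total

# Bisection: raise the charge if emissions are still above target, lower it otherwise.
lo, hi = 0.0, 1_000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if emissions_at_charge(mid) > TARGET:
        lo = mid
    else:
        hi = mid

print(f"Charge of about ${hi:,.2f}/ton brings emissions to {emissions_at_charge(hi):,.0f} tons")
```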

Only the highest functioning countries should attempt the most difficult economic approaches such as tradable permits.

The alternative is to find the cheapest-now prices by using mathematical modeling. The model would have to include information about source location and costs of discharge reduction, and to fit each source within a representation of the regional environment. The complexity and data requirements of the effort would present a formidable challenge for the underfunded, poorly staffed environmental ministries of the developing world.

New sources of revenue. Pollution charges and auctioned permits generate income, an appealing idea for governments needing tax revenue. But even where tax revenues are not in short supply, there is a policy argument for taxing pollution.

A simplified version of this is that it is, in principle, better to tax activities that are undesirable than to tax labor or “goods.” Taxes on activities that government should be encouraging distort the market outcome away from the social optimum: too little of the good is made or too little labor is offered. Pollution is something that society wants to discourage, and reducing it below the free-market amount is what pollution-control policy is all about. The revenue is in that sense “free,” and everyone is better off if society substitutes it for other taxes at the same time as it reduces discharges.

There is no question that successful emission charges or periodically auctioned and tradable permits to emit pollution will produce revenue for the responsible agency or for the entire government at the same time that they influence pollution discharges. But the revenue must be put into perspective. First, exactly because pollution discharge levels can be adjusted in response to these incentives, the relation between those levels and the revenue will be complex. There is no reason to expect that the charge or auctioned level chosen to produce maximum revenue is easy to find or is the same as a level that allows ambient standards to be met at lowest cost. Second, it is unlikely that very much revenue can be raised this way. Calculations based on figures from Sweden’s carbon tax suggest that raising more than 1 to 2 percent of a government’s budget requirements is most unlikely. Although this is not insignificant, it is also not a major source of revenue. Third, if the charge has its expected effect of changing behavior in the short run and encouraging a shift toward less-polluting technology in the longer run, revenue will decline over time. Fourth, there is nothing easy about collecting this revenue. If a country has trouble collecting sales and income taxes because it has difficulty monitoring sales or wages or because of corruption, there is no obvious reason why it will find it easier to monitor emissions and collect the appropriate taxes or enforce the actual purchased permits. Indeed, the record keeping necessary for ensuring that taxes work is probably much easier for sales and income, because they can be audited against a paper trail. Pollution discharges generally must be measured by special equipment as the pollution is created, and this monitoring capability does not exist in much of the developing world. In view of these differences, pollution “tax” revenue is likely harder to obtain.
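
The first three caveats can be illustrated with some back-of-the-envelope arithmetic, using invented figures. Revenue equals the charge multiplied by the emissions that remain after firms respond, so as responsiveness grows over time the revenue shrinks even though the charge is unchanged.

```python
# Hypothetical arithmetic behind the revenue caveats: charge revenue depends on the
# emissions that remain after firms respond, and it erodes as behavior changes.
# Numbers are made up for illustration only.

def remaining_emissions(charge: float, baseline: float, responsiveness: float) -> float:
    """Simple linear response: each $1/ton of charge removes `responsiveness` tons."""
    return max(0.0, baseline - responsiveness * charge)

BASELINE = 1_000_000  # tons per year with no charge
CHARGE = 100.0        # $/ton, held fixed

for year, responsiveness in [(1, 2_000), (5, 4_000), (10, 6_000)]:
    # Firms become more responsive over time as cleaner technology spreads.
    emissions = remaining_emissions(CHARGE, BASELINE, responsiveness)
    revenue = CHARGE * emissions
    print(f"Year {year}: {emissions:,.0f} tons remain, revenue ${revenue / 1e6:,.0f} million")
```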

The final concern here is political reality. Instituting taxes, particularly those that bite, often requires significant expenditures of political capital, as demonstrated by the U.S. experience with proposed energy tax increases. Environmental policymakers must ask whether the governments in developing countries and the countries in transition, facing steep unemployment and weak industries, will undertake this act of courage, particularly when they rank environmental issues very low among their priorities. Poland reduced its pollution charges in the mid-1990s in response to industry protest when the charges began to rise to a level that might have changed behavior.

Even cheaper in the future. The extra cost imposed by charges or auctioned permits provides a continuing incentive for industry to seek out and adopt new, less-polluting technology to avoid paying some charge amount or purchasing permits. In contrast, traditional nonmarketable permits to discharge are said to lack the incentive to innovate. But even under a traditional permit, saving money is still desirable so long as the cost of achieving the saving is less than the saving itself. The difference is that a traditional permit provides no incentive to reduce discharges below the level it sets.
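
A simple sketch of the incentive at the margin, with invented numbers, may help. Under a charge, each additional ton abated avoids the charge; under a traditional nontradable permit, abating pays only until the source reaches its permitted level.

```python
# Toy comparison of the marginal incentive to abate, with invented numbers.

CHARGE = 75.0         # $/ton paid on every ton emitted under an emissions charge
PERMIT_LEVEL = 500.0  # tons allowed under a traditional nontradable permit
PENALTY = 2_000.0     # assumed per-ton sanction for exceeding the permit

def marginal_payoff_under_charge(emissions: float) -> float:
    # Cutting one more ton always avoids one ton of charge.
    return CHARGE

def marginal_payoff_under_permit(emissions: float) -> float:
    # Above the permitted level, cutting a ton avoids the sanction;
    # at or below it, there is no further financial reason to abate.
    return PENALTY if emissions > PERMIT_LEVEL else 0.0

for emissions in (600.0, 500.0, 400.0):
    print(f"At {emissions:.0f} tons: charge rewards ${marginal_payoff_under_charge(emissions):.0f}/ton abated, "
          f"permit rewards ${marginal_payoff_under_permit(emissions):.0f}/ton abated")
```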

In any case, the argument assumes that governments will be willing to impose and actually collect charges significant enough to change behavior and that there will be adequate monitoring, political will, and consistent, timely collections not eroded by inflation. It does not consider the many countries that insulate firms from the marketplace with the equivalent of soft budget constraints or that use their banks for loans that support friends or government objectives, irrespective of sound business principles or sober assessment of credit. All these must be assumed away to make the theory plausible.

Other arguments. A few experts argue that using economic instruments can reduce or eliminate the need for regulatory bodies and enforcement programs by, as Harvard’s Panayotou argues, taking full advantage of the self-interest and superior information of producers and consumers “without requiring the disclosure of such information or creating large and costly bureaucracies.”

Comparing economic instruments as a group, he argues that they substitute for efforts to force compliance through enforcement and that they “tend to have lower institutional and human resource requirements than command and control regulations.” Developing countries cannot afford the “generous” infusion of “resources such as capital, government revenue, management skills, and administrative and enforcement capabilities” that are demanded by command and control requirements, but not by market-based instruments.

This argument flies in the face of the empirical evidence of the U.S. experience and the complications and qualifications discussed above. Despite the fact that Panayotou acknowledges that “the informational requirements of economic instruments are not insignificant,” the proponents of market-based incentives in the developing and transitioning world are not telling the entire story. When the details are taken into account, much less is given up by postponing the drive for economic incentives than is claimed.

Songs of experience

Many countries that have been the target of development assistance have one or more versions of market-based instruments on their books, and most of the communist bloc countries, as noted earlier, used pollution charges as a primary tool of environmental protection. We have not seen any convincing evidence that these policies have changed behavior or achieved their environmental goals.

A number of country- or region-specific reports claim to portray “ground truth,” but they rarely grapple seriously with the institutional issues surrounding the introduction of market-based instruments or discuss results and impediments. At best, there are brief allusions to “difficult” enforcement problems, as in many of the Environment Discussion Papers that came out of Harvard Institute for International Development’s Agency for International Development-funded Newly Independent States Environmental Economics and Policy Project.

The few studies that go more deeply into a single country’s experience sometimes do reveal the difficulties of applying environmental instruments, acknowledging, for example, inadequate monitoring and the problem of corruption. An example from University of Windsor geographer V. Chris Lakhan in the Electronic Green Journal documents Guyana’s experience: “The inherited legacy of environmental problems and current environmental abuses will not disappear with the mere [passing of laws and development of plans]. Success in environmental protection…will depend on addressing…administrative neglect and unethical practices, fragmentation of environmental institutions, shortages of professional and technical environmental personnel, paucity of financial resources, and the uncontrolled development practices of local and foreign investors.”

But more often, as exemplified by reports on Colombia’s experiment with charges on water pollutants, such studies resemble advocacy for the recommended programs rather than disinterested evaluation.

Donors and advisors should encourage development of credible behavioral rules, verification mechanisms, and a culture of compliance.

In sum, it is almost impossible to evaluate the actual experience of developing countries with market-based environmental policy instruments. These gaps in understanding are, unfortunately, not unusual. There are, similarly, only limited reports on the success or failure of other environmental development efforts, such as the promotion of National Environmental Action Plans, basic environmental law drafting, and various efforts to introduce technology.

There is no universally right choice of instrument for managing a nation’s environment. All policy instruments require monitoring capability, enforcement resolve, and control of corruption. Unfortunately, no single instrument provides a magic way around these concerns. More fundamentally, even the cheapest way of meeting some targets may be too big a commitment for some countries. Reaching environmental targets requires a politically tough collective decision to impose costs on the influential few for the benefit of the faceless many and to stick with the decision for a sustained period. It is therefore not surprising to see uneven implementation and slippage from time to time.

Our most urgent suggestion is that in this difficult situation, policy selection should not be a function of fad or ideology. The donors, advisors, and the countries themselves must invest energy in setting realistic targets and putting into place procedures that make some steady progress toward the ultimate goal. They must increase their attention to the importance of institutional reform and develop better understandings of the conditions in each country that influence the performance of specific policy instruments.

This suggests other ways of proceeding that are more consistent with scarce institutional resources and might promise some environmental returns that could in time become significant. One approach would be to emphasize incremental improvements in pursuit of pragmatic goals, particularly ones that help to build a transitional system that will take account of existing capabilities and institutions. Taking more measured steps does not have the same sense of adventure as a great environmental leap forward, but it might result in real, although small, initial environmental gains and could be accomplished without losing sight of the ultimate goal of developing the most efficient ways to manage the environment.

A concrete way to think about this would be a tiered approach. Countries with the lowest institutional capability level might start with simple discharge-control technology requirements, which are hard enough when experience and funding are lacking. The basis for selection would be to ask what is achievable and relatively easy to monitor. Ideally, success will breed regulatory confidence and more success.

Countries with a bit of experience under their belts could move to technology-based discharge limitations similar to those found in the U.S. Clean Water Act. They might establish discharge standards, such as plume opacity, which can be easily monitored, or put in place deposit-refund systems, not only for beverage containers but also for car batteries, tires, and dry-cleaning fluid. Only the highest functioning countries should attempt what we consider the most difficult of the economic instruments: making discharge permits tradable or charging per unit of pollution discharged.

The most important thing the donors and advisors can do is to encourage the development of credible behavioral rules, mechanisms for verifying and encouraging compliance, and a culture in which compliance is the first choice of action rather than the last.

Institutional capacity should not be an eternal barrier. Regulatory capacity and confidence can be developed in a number of ways. Institutions, like people, must practice to learn, and environmental policy is a particularly good practice ground because clean air and clean water are something most societies want. But setting the standard for success too high defeats confidence and confuses common sense. We believe this is what has happened with the effort to move the countries of the developing world directly to sophisticated market-based instruments for environmental protection.

From the Hill – Spring 2002

Federal R&D in FY 2002 will have biggest percentage gain in 20 years

Federal R&D spending will rise to $103.7 billion in fiscal year (FY) 2002, a $12.3 billion or 13.5 percent increase over FY 2001. It is the largest percentage increase in nearly 20 years (see table).

In addition, in response to the September 11, 2001, terrorist attacks and the subsequent anthrax attacks, Congress and President Bush approved $1.5 billion for terrorism-related R&D, nearly triple the FY 2001 level of $579 million. The president had originally proposed $555 million. About half the money comes from regular appropriations and half from emergency funding approved after the attacks.

All the major R&D agencies will benefit from the significant spending boost, in contrast to the proposed cuts for most agencies in the administration’s initial budget request. The biggest increases go to the two largest R&D funding agencies: the Department of Defense (DOD) and the National Institutes of Health (NIH). DOD R&D will increase $7.4 billion or 17.3 percent to $50.1 billion, largely because of a 66.4 percent increase, to $7 billion, for ballistic missile defense R&D. Basic research will increase 5 percent to $1.4 billion; applied research by 14.6 percent to $4.2 billion.

NIH R&D will increase 15.8 percent to $22.8 billion to fulfill the fourth year of Congress’s five-year campaign to double the agency’s budget. Every institute will receive an increase greater than 12 percent, and five will receive increases greater than 20 percent. NIH counterterrorism R&D will jump from $50 million to $293 million, including $155 million in emergency appropriations for construction of a biosafety laboratory and for bioterrorism R&D.

The total federal investment in basic and applied research will increase 11 percent or $4.8 billion to $48.2 billion. NIH remains the largest single sponsor of basic and applied research; in FY 2002, NIH will fund 46 percent of all federal support of research in these areas.

Nondefense R&D will rise by $4.6 billion or 10.3 percent to $49.8 billion, the sixth year in a row that it has increased in inflation-adjusted terms. Because a large part of recent increases stems from steady growth in the NIH budget, NIH R&D has become nearly as large as all other nondefense agencies’ R&D funding combined. Funding for nondefense R&D excluding NIH has stagnated in recent years. After steady growth in the 1980s, funding peaked in FY 1994 and then declined sharply. The FY 2002 increases for non-NIH agencies, although large, just barely bring these agencies back to the funding levels of the early 1990s.

The following is a breakdown of appropriations for other R&D funding agencies.

In the Department of Health and Human Services, the Centers for Disease Control and Prevention (CDC) will receive a 33.3 percent increase to $689 million for its R&D programs. Its counterterrorism R&D funding will climb to $130 million, up from $37 million in FY 2001. The CDC also received more than $1 billion in emergency funding.

The National Aeronautics and Space Administration’s (NASA’s) total budget of $14.9 billion in FY 2002 represents a 4.5 percent increase over FY 2001. Total NASA R&D, which excludes the Space Shuttle and its mission support costs, will increase 3.8 percent to $10.3 billion. The troubled International Space Station, now projected to run more than $4 billion over budget during the next five years, will receive $1.7 billion, an 18.4 percent cut.

The Department of Energy (DOE) will receive $8.1 billion, which is $378 million or 4.9 percent more than in FY 2001. R&D in DOE’s three mission areas of energy, science, and defense will all rise, with small increases for energy R&D (up 1.6 percent) and science R&D (up 2.1 percent) and a larger increase for defense R&D (up 8.4 percent), which partially reflects emergency appropriations for counterterrorism R&D. DOE received a large increase for its programs to combat potential nuclear terrorism.

National Science Foundation (NSF) R&D funding will rise by 7.6 percent to $3.5 billion. Most research directorates will receive increases greater than 8 percent, compared to level or declining funding in the president’s request. The largest increases, however, will go to NSF’s non-R&D programs in education and human resources for a new math and science education partnerships program. The final budget also boosts funding for information technology and nanotechnology research.

The U.S. Department of Agriculture (USDA) will receive a large budget boost from emergency funds to combat terrorism. USDA R&D will total $2.1 billion in FY 2002, a boost of $180 million or 9.2 percent. USDA’s intramural Agricultural Research Service (ARS) will receive $40 million in emergency funds for research on food safety and potential terrorist threats to the food supply and $73 million in R&D facilities funds to improve security at two laboratories that handle pathogens.

The Department of Commerce’s R&D programs will receive $1.4 billion, which is $153 million or 12.7 percent more than in FY 2001. Commerce’s two major R&D agencies, the National Institute of Standards and Technology (NIST) and the National Oceanic and Atmospheric Administration (NOAA), will both receive large increases. NOAA R&D will rise by 15.3 percent to $836 million. NIST’s Advanced Technology Program will get a 26.6 percent boost to $150 million, despite the desire of the administration and the House to all but eliminate the program. Total NIST R&D will increase 17.1 percent to $493 million.

The Department of the Interior’s R&D budget totals $673 million in FY 2002, an increase of 6.5 percent. Although the president’s FY 2002 request caused alarm in the science and engineering community because of its proposed cut of nearly 11 percent for R&D in the U.S. Geological Survey (USGS), the final budget restores the cuts and gives USGS an increase of 3.1 percent over FY 2001 to $567 million.

The Environmental Protection Agency FY 2002 R&D budget will increase to $702 million, up $93 million or 15.3 percent from last year. The boost is due to $70 million in emergency counterterrorism R&D funds, including money for drinking water vulnerability assessments and anthrax decontamination work. The nonemergency funds for most R&D programs will remain at the FY 2001 level, though nearly 50 congressionally designated research projects were added to the Science and Technology account and nearly 20 earmarked R&D projects were added to other accounts.

Department of Transportation R&D will climb to $853 million in FY 2002, which is $106 million or 14.2 percent more than FY 2001. The Federal Aviation Administration (FAA) will receive $50 million in emergency counterterrorism funds to develop better aviation security technologies. The FAA will receive a total of $373 million for R&D, a gain of 23.9 percent because of the emergency funds and also because of guarantees of increased funding for FAA programs that became law last year.

R&D in the FY 2003 Budget by Agency
(budget authority in millions of dollars)

                                      FY 2001    FY 2002    FY 2003    Change FY 02-03
                                       Actual   Estimate     Budget    Amount   Percent
Total R&D (Conduct and Facilities)
Defense (military)                     42,235     49,171     54,544     5,373     10.9%
  S&T (6.1-6.3 + medical)               8,933      9,877      9,677      -200     -2.0%
  All Other DOD R&D                    33,302     39,294     44,867     5,573     14.2%
Health and Human Services              21,037     23,938     27,683     3,745     15.6%
  Nat’l Institutes of Health           19,737     22,539     26,472     3,933     17.4%
NASA                                    9,675      9,560     10,069       509      5.3%
Energy                                  7,772      9,253      8,510      -743     -8.0%
  NNSA and other defense                3,414      4,638      4,010      -628    -13.5%
  Energy and Science programs           4,358      4,615      4,500      -115     -2.5%
Nat’l Science Foundation                3,363      3,571      3,700       129      3.6%
Agriculture                             2,182      2,336      2,118      -218     -9.3%
Commerce                                1,054      1,129      1,114       -15     -1.3%
  NOAA                                    586        644        630       -14     -2.2%
  NIST                                    412        460        472        12      2.6%
Interior                                  622        660        628       -32     -4.8%
Transportation                            792        867        725      -142    -16.4%
Environ. Protection Agency                598        612        650        38      6.2%
Veterans Affairs                          748        796        846        50      6.3%
Education                                 264        268        311        43     16.0%
All Other                                 922      1,021        858      -163    -16.0%
  Total R&D                            91,264    103,182    111,756     8,574      8.3%
Defense R&D                            45,649     53,809     58,554     4,745      8.8%
Nondefense R&D                         45,615     49,373     53,202     3,829      7.8%
  Nondefense R&D excluding NIH         25,878     26,834     26,730      -104     -0.4%
Basic Research                         21,330     23,542     25,545     2,003      8.5%
Applied Research                       21,960     24,082     26,290     2,208      9.2%
Development                            43,230     50,960     55,520     4,560      8.9%
R&D Facilities and Equipment            4,744      4,598      4,401      -197     -4.3%

Source: AAAS, based on OMB data for R&D for FY 2003, agency budget justifications, and information from agency budget offices.

Bush FY 2003 R&D budget increases would go mostly to DOD, NIH

On February 4, the Bush administration released its fiscal year (FY) 2003 budget request containing a record $111.8 billion for R&D. But in a repeat of last year’s request, nearly the entire increase would go to the Department of Defense (DOD) and the National Institutes of Health (NIH).

There are no clear patterns in the mix of increases and decreases for the remaining R&D funding agencies. Unlike last year, the FY 2003 budget would see increases and decreases scattered even within R&D portfolios, as agencies try to prioritize in an environment of scarce resources. Some cuts stem from the administration’s campaign to eliminate congressional earmarks, which reached $1.5 billion in FY 2002. Cuts in some agencies are due to efforts to return to normal funding levels from FY 2002 totals inflated by post-September 11 counterterrorism appropriations. However, spending on counterterrorism activities would remain robust, particularly in the areas of public health infrastructure, emergency response networks, and basic health-related research.

In sharp contrast to the financial optimism of last year’s budget, when economists forecasted endless surpluses, the FY 2003 budget proposes deficit spending. With President Bush taking the lead in preparing the public for budget deficits for the next few years, the most likely outcome is that Congress will spend whatever it feels it needs in order to adequately fund defense, domestic programs, homeland security, and other priorities.

For federal R&D programs, the only thing certain is that NIH will eventually receive its requested $27.3 billion and perhaps even more. In an election year, the pressures on Congress to add more money will be even greater than last year. Combined with the continuing crisis atmosphere surrounding matters related to war and security and the near-disappearance of budget balancing as a constraint, the president’s budget will almost certainly be a floor rather than a ceiling for the R&D appropriations action to come.

NIH would receive $27.3 billion for its total budget, an increase of $3.7 billion (15.7 percent) that would fulfill the congressional commitment to double the budget in five years. Of that, about $1.8 billion would go for antibioterrorism efforts, including basic research, drug procurement ($250 million for an anthrax vaccine stockpile), and improvements in physical security.

NIH R&D would rise 17.4 percent to $26.5 billion. The big winner would be the National Institute of Allergy and Infectious Diseases (NIAID), which would receive a boost of 57.3 percent to $4 billion as NIH’s lead institute for basic bioterrorism R&D. NIAID is also the lead NIH institute for AIDS research, which would increase 10 percent to $2.8 billion. Cancer research is another high priority, with a request of $5.5 billion, of which $4.7 billion would go to the National Cancer Institute. Buildings and Facilities would nearly double to $633 million over an FY 2002 total already inflated by emergency counterterrorism spending. The money would be used to further improve laboratory security, build new bioterrorism research facilities, and finish construction of NIH’s new Neuroscience Research Center. Most of the other institutes would receive increases between 8 and 9 percent.

DOD R&D would rise to $54.6 billion, an increase of $5.4 billion or 10.9 percent. However, most of this increase would go to the development of weapons systems rather than to research. The DOD science and technology account, which includes basic and applied research plus generic technology development, would fall 2 percent to $9.7 billion. After a near doubling of its budget in FY 2002, the Ballistic Missile Defense Organization would see its R&D budget decline slightly to $6.7 billion, which would still be more than 50 percent above the FY 2001 funding level. The Defense Advanced Research Projects Agency would be a big winner, with a proposed 19.2 percent increase to $2.7 billion.

The National Science Foundation (NSF) budget would rise by 5 percent to $5 billion. Excluding non-R&D education activities, NSF R&D would be $3.7 billion, up $129 million or 3.6 percent. However, $76 million of that increase would come from transfers: the National Sea Grant program from the Department of Commerce, hydrologic sciences from the Department of the Interior, and environmental education from the Environmental Protection Agency (EPA). Although mathematical sciences would receive a 20 percent increase to $182 million, other programs in Mathematical and Physical Sciences, such as chemistry, physics, and astronomy, would decline. Another big winner would be Information Technology Research (up 9.9 percent), though at the expense of other computer sciences research. The budget for the administration’s high-priority Math and Science Partnerships would increase from $160 million to $200 million, but most other education and human resources programs would be cut.

The National Aeronautics and Space Administration (NASA) would see its total budget increase by 1.4 percent to $15.1 billion in FY 2003, but NASA’s R&D (two-thirds of the agency’s budget) would climb 5.3 percent to $10.1 billion. In an attempt to rein in the over-budget and much-delayed International Space Station, only $1.5 billion is being requested for further construction, down from $1.7 billion. The Science, Aeronautics and Technology R&D accounts would climb 10.3 percent to $8.9 billion. Space Science funding would increase 13 percent to $3.4 billion, though the administration would cancel missions to Pluto and Europa. Funding for the Biological and Physical Research program, which was greatly expanded last year to take on all Space Station research, would rise 2.8 percent to $851 million. Aero-Space Technology would climb 11.7 percent to $2.9 billion, including $759 million (up 63 percent) for the Space Launch Initiative, which is developing new technologies to replace the shuttle. The NASA request would eliminate most R&D earmarks added on to the FY 2002 budget, resulting in a nearly 50 percent cut to Academic Programs, a perennial home to congressional earmarks.

The Department of Energy (DOE) would see its R&D fall 8 percent to $8.5 billion from an FY 2002 total inflated with one-time emergency counterterrorism R&D funds. Funding for the Office of Science would remain flat at $3.3 billion, but most programs would receive increases, offset by cuts in R&D earmarks and a planned reduction in construction funds for the Spallation Neutron Source. Although overall funding for Solar and Renewables R&D would remain level, the program emphasis would shift toward hydrogen, hydropower, and wind research. Fossil Energy R&D would receive steep cuts of up to 50 percent on natural gas and petroleum technologies. In Energy Conservation, DOE would replace the Partnership for a New Generation of Vehicles with FreedomCAR, a collaborative effort with industry to develop hydrogen-powered fuel cell vehicles. DOE’s defense R&D programs would fall 13.5 percent to $4 billion because the FY 2002 total is inflated with one-time counterterrorism emergency funds. However, defense programs in advanced scientific computing R&D and stockpile stewardship R&D would receive increases.

R&D in the U.S. Department of Agriculture (USDA) would decline $218 million or 9.3 percent to $2.1 billion, mostly because of proposed cuts to R&D earmarks and the loss of one-time FY 2002 emergency antiterrorism funds. Funding for competitive research grants in the National Research Initiative (NRI) would double from $120 million to $240 million, offsetting steep cuts in earmarked Special Research Grants from $103 million to $7 million. The large NRI increase would partially make up for the administration’s decision to block a $120-million mandatory competitive research grants program from spending any money in FY 2003. In the intramural Agricultural Research Service (ARS) programs, Buildings and Facilities funding would fall from $119 million to $17 million because FY 2002 emergency antiterrorism security upgrades have been made and because congressionally earmarked construction projects would not be renewed. ARS research would decrease by $30 million to $1 billion, but selected priority research programs would receive increases, offset by the cancellation of R&D earmarks.

Department of Commerce R&D programs would decline 1.3 percent to $1.1 billion. Once again the administration has requested steep reductions in the Advanced Technology Program at the National Institute of Standards and Technology. National Oceanic and Atmospheric Administration (NOAA) R&D would decline by 2.2 percent or $14 million due to the proposed transfer of the $62 million National Sea Grant program to NSF. Overall, NOAA R&D programs would increase.

R&D in the Department of the Interior would decline 4.8 percent to $628 million, but steeper cuts would fall on Interior’s lead science agency, the U.S. Geological Survey (USGS). USGS R&D would decrease 7 percent or $41 million to $542 million. Hardest hit would be the National Water Quality Assessment Program and the Toxic Substances Hydrology Program, including a $10 million transfer to NSF to initiate a competitive grants process to address water quality issues.

The EPA R&D budget would rise 6.2 percent to $650 million in FY 2003. Much of this increase is due to $77.5 million proposed for research in dealing with biological and chemical incidents.

New program for math and science teachers receives little funding

After nearly a year of negotiations, Congress enacted a sweeping reform law for federal K-12 education programs in December 2001 that included the creation of a new program for math and science teachers. However, the appropriations bill that provides funding for federal education programs has left it with little money.

The education law, signed by President Bush in January, creates a broad “Teacher Quality” program, which will provide grants to states for a wide array of purposes relating to teacher quality, including professional development. It also creates a program aimed specifically at improving math and science education. The program will establish partnerships between state and local education agencies and higher education institutions for bolstering the professional development of math and science teachers. It also includes several other types of activities to improve math and science teaching.

The new program replaces the Eisenhower Professional Development program, which provided opportunities for K-12 teachers to expand their knowledge and expertise. In fiscal 2001, the Eisenhower program received $485 million, $250 million of which was set aside for programs aimed at math and science teachers.

The new science and math program was strongly supported by the scientific, education, and business communities, which argue that the scientific literacy of the nation’s workforce is essential to national security and economic prosperity. Proponents point to the labor shortage that has existed in the high-tech sector in recent years and the prevalence of foreign students in U.S. graduate programs as evidence that U.S. math and science education programs need to be improved.

However, the fiscal 2002 appropriations bill that includes education spending allocated $2.85 billion for the broad teacher quality initiative but just $12.5 million for the math and science partnerships, far short of the $450 million authorized by the education reform law.

The conference report on the appropriations bill acknowledges that good math and science education “is of critical importance to our nation’s future competitiveness,” and agrees that “math and science professional development opportunities should be expanded,” but relies on the states to fund such programs within the teacher quality program. “The conferees strongly urge the Secretary [of Education] and States to utilize funding provided by the Teacher Quality Grant program, as well as other programs funded by the federal government, to strengthen math and science education programs across the nation,” the report states.

A similar program has also been created within the National Science Foundation (NSF), as called for in the president’s original reform proposal, and was provided with $160 million for the current year. However, the NSF grants will be distributed through a nationwide competition and are not likely to achieve the balance or scope of the $450 million program envisioned by the authors of the reform law.

Also included in NSF’s fiscal year 2002 budget are two pilot education programs funded at $5 million apiece. One, based on legislation sponsored by Sen. Joseph I. Lieberman (D-Conn.), will provide grants to colleges and universities that pledge to increase the number of math, science, and engineering majors they graduate. The other, based on a proposal by Rep. Sherwood L. Boehlert (R-N.Y.), will provide scholarships to undergraduate students majoring in math, science, or engineering who pledge to teach for two years after their graduation.

Congress considers additional antiterrorism legislation

The House and Senate have passed or are considering additional counterterrorism legislation in the aftermath of last year’s attacks.

In December 2001, the House and Senate both passed bills (H.R. 3448 and S. 1765) that would improve bioterrorism preparedness at state and federal levels, encourage the development of new vaccines and other treatments, and tighten federal oversight of food production and use of dangerous biological agents. Because the bills are similar, resolution of the differences between the two was expected as early as March.

Both bills would grant the states about $1 billion for bioterrorism preparedness; both would spend approximately $1.2 billion on building up the nation’s stockpile of vaccines ($509 million for smallpox vaccine alone) and other emergency medical supplies; and both would increase the federal government’s ability to monitor and control dangerous biological agents and to mount a rapid coordinated response to a bioterrorist attack.

One of the few substantive differences between the bills concerns food and water safety. The Senate version provides more than $520 million to improve food safety and protect U.S. agriculture from bioterrorism. The House version, however, provides only $100 million, focusing instead on funding for water safety ($170 million).

There are also some discrepancies in the amount of money allocated to specific programs. The Senate bill authorizes only $120 million for laboratory security and emergency preparedness at the Centers for Disease Control and Prevention, whereas the House bill provides $450 million.

On February 7, the House passed the Cyber Security Research and Development Act (H.R. 3394) by a vote of 400 to 12. The bill would authorize $877 million in funding for the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST). The money would go toward an array of programs to improve basic research in computer security, encourage partnerships between industry and academia, and help generate a new cybersecurity workforce.

House Science Committee Chairman Sherwood Boehlert (R-N.Y.) introduced the bill in the aftermath of the terrorist attacks. “The attacks of September 11th have turned our attention to the nation’s weaknesses, and again we find that our capacity to conduct research and to educate will have to be enhanced if we are to counter our foes over the long run,” Boehlert said. The bill’s cosponsor and the committee’s ranking member, Rep. Ralph Hall (D-Tex.), stated, “The key to ensure information security for the long term is to establish a vigorous and creative basic research effort.”

The bill authorizes $568 million between fiscal years (FYs) 2003 and 2007 to NSF, of which $233 million would go for basic research grants; $144 million for the establishment of multidisciplinary computer and network security research centers; $95 million for capacity-building grants to establish or improve undergraduate and graduate education programs; and $90 million for doctoral programs.

NIST would receive almost $310 million over the same five-year period, of which $275 million would go toward research programs that involve a partnership between industry, academia, and government laboratories. In addition, funding may go toward postdoctoral research fellowships. The bill provides $32 million for intramural research conducted at NIST laboratories. The bill also proposes spending $2.15 million for NIST’s Computer System Security and Privacy Advisory Board to conduct analyses of emerging security and research needs and $700,000 for a two-year study of the nation’s infrastructure by the National Research Council.

Congress continues to debate other measures that could improve the nation’s preparedness against terrorist attacks. On February 5, at a hearing of the Senate Subcommittee on Science, Technology and Space, Chair Ron Wyden (D-Ore.) discussed a bill that would create what he called a “National Emergency Technology Guard,” a cadre of volunteers that could be called upon in case of a terrorist attack or other emergency. Wyden also advocated creating a central clearinghouse for information about government funding for bioterrorism R&D, as well as local registries of resources, such as hospital beds, medical supplies, and antiterrorism experts, that would speed response to a bioterror attack.

According to witnesses at the hearing, both the private and academic sectors have had difficulty working with the federal government to protect the United States from bioterrorism. The main challenge faced by small companies trying to develop antiterrorism technologies is the lack of funding for products that may not have immediate market value, said John Edwards, CEO of Photonic Sensor, and Una Ryan, CEO of AVANT Immunotherapeutics and a representative of the Biotechnology Industry Organization. They testified in favor of the kind of central clearinghouse recommended by Wyden, which they argued would speed the development of antibioterrorism technologies.

Along similar lines, Bruno Sobral, director of the Virginia Bioinformatics Institute, suggested that a government-sponsored central database of bioterrorism-related information would facilitate coordination among academic researchers, who otherwise might fail to identify crucial gaps in knowledge about dangerous pathogens.

Proposal for comprehensive cloning ban debated

The Senate was expected to vote in early spring on a proposal, already approved in the House, for a comprehensive ban on human cloning. A bruising fight was expected. Since Congress reconvened in January, two Senate committees have held hearings on the issue, and outlines of the debate have taken both a familiar and a unique shape.

On one side are proponents of a bill (S.1899) sponsored by Sens. Sam Brownback (R-Kan.) and Mary Landrieu (D-La.) that is identical to a bill approved by the House in the summer of 2001 (H.R.2505). The bill would ban all forms of human cloning, whether for producing a human baby (reproductive cloning) or for scientific research (research cloning). On the other side are proponents of a narrower cloning ban that would prohibit reproductive cloning but permit research cloning. Two such narrow bans have been introduced, one by Sens. Tom Harkin (D-Iowa) and Arlen Specter (R-Penn.) and the other by Sens. Dianne Feinstein (D-Calif.) and Edward M. Kennedy (D-Mass).

Supporting the Brownback-Landrieu bill is an unusual coalition of religious conservatives and environmentalists. Religious conservatives argue that human embryos should be afforded a moral status similar to human beings and should not be destroyed even in the course of scientific research. Environmentalists argue that permitting research cloning would open the door to reproductive cloning and that such research should not proceed until strict regulatory safeguards are implemented.

Opposing the Brownback-Landrieu bill is a coalition of science organizations, patient groups, and the biotechnology industry, which argue that research cloning could potentially lead to cures for many diseases, that reproductive cloning can be stopped without banning research, and that criminalizing scientific research sets a bad precedent.

At the first of the two Senate hearings, the Senate Appropriations Committee’s Labor-Health and Human Services (HHS) Subcommittee heard from Irving L. Weissman, who chaired a National Research Council panel on reproductive cloning. He cited a low success rate in animal cloning and abnormalities in cloned animals that survive as reasons for a ban on human reproductive cloning. However, he testified that there is evidence that stem cells derived from cloned embryos are functional.

“Scientists place high value on the freedom of inquiry–a freedom that underlies all forms of scientific and medical research,” Weissman said. “Recommending restriction of research is a serious matter, and the reasons for such a restriction must be compelling. In the case of human reproductive cloning, we are convinced that the potential dangers to the implanted fetus, to the newborn, and to the woman carrying the fetus constitute just such compelling reasons. In contrast, there are no scientific or medical reasons to ban nuclear transplantation to produce stem cells, and such a ban would certainly close avenues of promising scientific and medical research.”

Brent Blackwelder, president of Friends of the Earth, laid out the environmental community’s case against human cloning. He argued that cloning and the possible advent of inheritable genetic modifications (changes to a person’s genetic makeup that can be passed on to future generations) “violate two cornerstone principles of the modern conservation movement: 1) respect for nature and 2) the precautionary principle.” He described these potential developments as “biological pollution,” a new kind of pollution “more ominous possibly than chemical or nuclear pollution.”

Blackwelder advocated a moratorium on research cloning in order to prevent reproductive cloning from taking place. “Even though many in the biotechnology business assert that their goal is only curing disease and saving lives,” he said, “the fact remains that once these cloning and germline technologies are perfected, there are plenty who have publicly avowed to utilize them.”

Although Blackwelder described the Feinstein-Kennedy bill as “Swiss cheese,” Specter, the ranking member of the Labor-HHS subcommittee, vowed to erect a strong barrier between research and reproductive cloning. “We’re going to put up a wall like Jefferson’s wall between church and state,” he said.

The second hearing, held by the Senate Judiciary Committee, featured testimony from Rep. Dave Weldon (R-Fla.), who shepherded the cloning ban through the House. Weldon addressed the moral status of a human embryo, describing the “great peril of allowing the creation of human embryos, cloned or not, specifically for research purpose.” He added, “Regardless of the issue of personhood, nascent human life has some value.”

Among those testifying in favor of the Feinstein-Kennedy bill was Henry T. Greely, a Stanford law professor representing the California Advisory Committee on Human Cloning, which released a report in January 2002 entitled, Cloning Californians? The report, which was mandated by a 1997 state law imposing a temporary ban on reproductive cloning, unanimously recommended a continued ban on reproductive cloning but not on research cloning.

“Government should not allow human cloning to be used to make people,” Greely said. “It should allow with due care human cloning research to proceed to find ways to relieve diseases and conditions that cause suffering to existing people.”

Future is cloudy for Space Station as new NASA chief takes helm

In a move that throws doubt on the future of the International Space Station (ISS), President Bush has appointed Sean O’Keefe, formerly deputy director of the Office of Management and Budget (OMB), to be the new administrator of the National Aeronautics and Space Administration (NASA). He replaces longtime administrator Daniel Goldin. The Senate confirmed the nomination on December 20.

The appointment was announced just a week after O’Keefe appeared at a November 7 House Science Committee hearing to defend a report criticizing the Space Station’s financial management. He came under fire from some committee members for saying that NASA should focus its current efforts on maintaining a three-person crew on the station rather than expanding to the seven-member crew originally envisioned for the ISS.

At his Senate confirmation hearing, O’Keefe received unanimous support from members of the Commerce Committee’s Subcommittee on Science, Technology, and Space, but the concerns expressed by the House Science Committee members were echoed loudly by Sens. Bill Nelson (D-Fla.) and Kay Bailey Hutchison (R-Tex.). Both hail from states that are home to NASA centers critical to the Space Station program.

Debate over ISS has heated up since NASA announced in the spring of 2001 that the project, which was already several years behind schedule and billions of dollars over budget, was facing another $4 billion cost overrun. In conjunction with OMB, NASA created the ISS Management and Cost Evaluation Task Force to assess the program’s financial footing. The task force, chaired by former Martin Marietta president A. Thomas Young, released a November 1, 2001, report that was the topic of the Science Committee hearing. Young testified alongside O’Keefe, who was representing OMB, and strongly endorsed the report.

The report found that “the assembly, integration, and operation of the [station’s] complex systems have been conducted with extraordinary success, proving the competency of the design and the technical team,” but that the program has suffered from “inadequate methodology, tools, and controls.” Further, the report concluded that the current program plan for fiscal years 2002-2006 was “not credible.”

The task force recommended major changes in program management and identified several areas for possible cost savings, including a reduction in shuttle flights to support the station from six to four per year. The panel also identified several steps to improve the program’s scientific research, including better representation of the scientific community within the ISS program office.

At the House hearing, O’Keefe and Young refused to endorse the seven-person crew originally planned for the station. Instead, they said NASA should produce a credible plan for achieving the “core complete” stage, which includes the three-person crew currently in place, before embarking on plans to expand. However, NASA has said that roughly 2.5 crew members are needed just to maintain the station, so with only three crew members, the time available for conducting research would be scarce. The task force confirmed that assessment.

Rep. Ralph M. Hall (D-Tex.), the ranking member of the Science Committee, said that the approach recommended by the task force “seems to me to be a prescription for keeping the program in just the sort of limbo that the task force properly decries… We should be explicit that we are committed to completing the space station with its long-planned seven-person crew capability.” A three-person ISS, he said, is not worth the money.

Some ISS partners, including Canada, Europe, Japan, and Russia, have also opposed a three-person crew, arguing that a failure to field at least a six-person crew would violate U.S. obligations under the agreements that created the ISS.

Science Committee Chair Sherwood L. Boehlert (R-N.Y.) defended the task force for arguing that, “we’re not going to buy you a Cadillac until we see that you can handle a Chevy.” In fact, nearly every member praised the panel’s efforts to help NASA control costs, if not its view of what ISS’s goals should ultimately be, but Rep. Dave Weldon (R-Fla.) criticized the proposed reduction in shuttle flights, saying it would lead to layoffs. “It looks like the administration is not a supporter of the manned space flight program,” he declared.

Language on evolution attached to education law

The conference report accompanying the education reform bill passed by Congress in December 2001 includes controversial though not legally binding language regarding the teaching of evolution.

Although Congress usually steers clear of any involvement in state and local curriculum development, the Senate in June 2001 passed a sense of the Senate amendment proposed by Sen. Rick Santorum (R-Penn.), dealing with how evolution is taught in schools. The resolution stated that, “where biological evolution is taught, the curriculum should help students to understand why this subject generates so much continuing controversy.”

Although the resolution appears uncontroversial on its face, the statement was hailed by anti-evolution groups as a major victory and criticized by scientific organizations. Proponents view it as an endorsement of the teaching of alternatives to evolution in science classes. Opponents say the resolution fails to make the crucial distinction between political and scientific controversy. Although evolution has generated a great deal of political and philosophical debate, the opponents argue, it is generally regarded by scientists as a valid and well-supported scientific theory.

In response to the resolution’s passage, a letter signed by 96 scientific and educational organizations was sent in August 2001 to Sen. Edward M. Kennedy (D-Mass.) and Rep. John Boehner (R-Ohio), the chairmen of the education conference committee, requesting removal of the language from the final bill. In an apparent compromise, the committee declined to include it as a sense of Congress resolution but added the following slightly altered language to the final conference report:

“The conferees recognize that a quality science education should prepare students to distinguish the data and testable theories of science from religious or philosophical claims that are made in the name of science. Where topics are taught that may generate controversy (such as biological evolution), the curriculum should help students to understand the full range of scientific views that exist, why such topics may generate controversy, and how scientific discoveries can profoundly affect society.”

This language has been praised by anti-evolution groups and criticized by scientists for the same reasons as the original amendment. Neither a sense of Congress resolution nor report language, however, has the force of law, so the debate has primarily symbolic importance.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Life-Saving Products from Coral Reefs

During the past decade, marine biotechnology has been applied to the areas of public health and human disease, seafood safety, development of new materials and processes, and marine ecosystem restoration and remediation. Dozens of promising products from marine organisms are being advanced, including a cancer therapy made from algae and a painkiller taken from the venom in cone snails. The antiviral drugs Ara-A and AZT and the anticancer agent Ara-C, developed from extracts of sponges found on a Caribbean reef, were among the earliest modern medicines obtained from coral reefs. Other products, such as Dolostatin 10, isolated from a sea hare found in the Indian Ocean, are under clinical trials for use in the treatment of breast and liver cancers, tumors, and leukemia. Indeed, coral reefs represent an important and as yet largely untapped source of natural products with enormous potential as pharmaceuticals, nutritional supplements, enzymes, pesticides, cosmetics, and other novel commercial products. The potential importance of coral reefs as a source of life-saving and life-enhancing products, however, is still not well understood by the public or policymakers. But it is a powerful reason for bolstering efforts to protect reefs from degradation and overexploitation and for managing them in sustainable ways.

Between 40 and 50 percent of all drugs currently in use, including many of the anti-tumor and anti-infective agents introduced during the 1980s and 1990s, have their origins in natural products. Most of these were derived from terrestrial plants, animals, and microorganisms, but marine biotechnology is rapidly expanding. After all, 80 percent of all life forms on Earth are present only in the oceans. Unique medicinal properties of coral reef organisms were recognized by Eastern cultures as early as the 14th century, and some species continue to be in high demand for traditional medicines. In China, Japan, and Taiwan, tonics and medicines derived from seahorse extracts are used to treat a wide range of ailments, including sexual disorders, respiratory and circulatory problems, kidney and liver diseases, throat infections, skin ailments, and pain. In recent decades, scientists using new methods and techniques have intensified the search for valuable chemical compounds and genetic material found in wild marine organisms for the development of new commercial products. Until recently, however, the technology needed to reach remote and deepwater reefs and to commercially develop marine biotechnology products from organisms occurring in these environments was largely inadequate.

The prospect of finding a new drug in the sea, especially among coral reef species, may be 300 to 400 times more likely than isolating one from a terrestrial ecosystem. Although terrestrial organisms exhibit great species diversity, marine organisms have greater phylogenetic diversity, including several phyla and thousands of species found nowhere else. Coral reefs are home to sessile plants and fungi similar to those found on land, but they also contain a diverse assemblage of invertebrates that are absent from terrestrial ecosystems, such as corals, tunicates, molluscs, bryozoans, sponges, and echinoderms. These animals spend most of their lives firmly attached to the reef and cannot escape environmental perturbations, predators, or other stressors. Many engage in a form of chemical warfare, using bioactive compounds to deter predation, fight disease, and prevent overgrowth by fouling and competing organisms; some also use toxins to capture prey. These compounds may be synthesized by the organism itself or by the endosymbiotic microorganisms that inhabit its tissues, or they may be sequestered from the organism’s food. Because of their unique structures or properties, these compounds may yield life-saving medicines or other important industrial and agricultural products.

Despite these potential benefits, the United States and other countries are only beginning to invest in marine biotechnology. For the past decade, Japan has been the leader, spending $900 million to $1 billion each year, about 80 percent of which comes from industry. In 1992, the U.S. government invested $44 million in marine biotechnology research, less than 1 percent of its total biotechnology R&D budget; industry invested an additional $25 million. In 1996, the latest year for which figures are available, U.S. government investment in marine biotechnology research was estimated at only $55 million. Even with limited funding, U.S. marine biotechnology efforts since 1983 have resulted in more than 170 U.S. patents, with close to 100 new compounds patented between 1996 and 1999. U.S. support for marine biotechnology research is likely to increase in the coming years. According to the National Oceanic and Atmospheric Administration, marine biotechnology has become a multibillion-dollar industry worldwide, with projected annual growth of 15 to 20 percent during the next five years.
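To put that projection in perspective, the following is a minimal compound-growth sketch in Python. It uses only the growth rates quoted above and makes no assumption about the industry's current dollar size; the five-year horizon simply matches the projection period.

```python
# Illustrative compound-growth arithmetic for the projection cited above:
# 15 to 20 percent annual growth sustained over five years.
# Only the growth multiplier is computed; no base market size is assumed.

for annual_growth in (0.15, 0.20):
    multiplier = (1 + annual_growth) ** 5
    print(f"{annual_growth:.0%} per year for 5 years -> {multiplier:.2f}x today's market")

# At 15 percent the market roughly doubles; at 20 percent it grows about 2.5-fold.
```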

Expanded efforts by the United States and other developed countries to evaluate the medical potential of coral reef species are urgently needed, in particular to develop a new generation of specialized tools and processes for collecting, identifying, evaluating, and developing new bioproducts. The high cost and technical difficulty of identifying and obtaining marine samples, the need for novel screening technologies and techniques to maximize recovery of bioactive compounds, and the difficulty of identifying a sustainable source of an organism for clinical development and commercial production are among the primary factors limiting marine bioprospecting.

The identification and extraction of natural products require major search and collection efforts. In the past, invertebrates were taken largely at random from reefs, often in huge quantities, and bioprospectors rarely indicated how much of an organism they were seeking, making it difficult to assess the impact of collection. Chemists homogenized hundreds of kilograms of an individual species in hopes of identifying a useful compound. This technique often yielded a suite of compounds, but each occurred in trace amounts insufficient for the wide range of targeted assays needed to identify a compound of interest. In one reported case, a U.S. bioprospecting group collected 1,600 kg of a sea hare to isolate 10 mg of a compound used to fight melanoma; another group collected 2,400 kg of an Indo-Pacific sponge to produce 1 mg of an anticancer compound. Yet as much as 1 kg of a bioactive metabolite may ultimately be required for drug development.
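The gap between those collection yields and the kilogram-scale quantities needed for drug development is easy to understate. The short Python sketch below scales the two examples above to a 1 kg target; the helper function `biomass_needed` is hypothetical, and the linear-yield assumption is purely for illustration.

```python
# Illustrative arithmetic only: scaling the collection figures cited above to the
# roughly 1 kg of bioactive metabolite that drug development may require.
# Assumes yield stays linear with raw material collected; real yields vary widely.

def biomass_needed(raw_kg, compound_mg, target_kg=1.0):
    """Raw biomass (kg) needed to obtain target_kg of compound at the observed yield."""
    yield_fraction = (compound_mg * 1e-6) / raw_kg  # convert mg of compound to kg
    return target_kg / yield_fraction

# Sea hare: 1,600 kg collected for 10 mg of an anti-melanoma compound
print(f"Sea hare: ~{biomass_needed(1600, 10):,.0f} kg of raw biomass per kg of compound")

# Indo-Pacific sponge: 2,400 kg collected for 1 mg of an anticancer compound
print(f"Sponge:   ~{biomass_needed(2400, 1):,.0f} kg of raw biomass per kg of compound")

# Roughly 160 million kg for the sea hare and 2.4 billion kg for the sponge,
# suggesting why wild harvest alone is unlikely to supply a commercial drug.
```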

Targeting a promising compound is only the first step; a renewable source for the compound must also be established before a new drug can be developed. Many suitable species occur at a low biomass or have a limited distribution, and in some cases a compound may occur only in species exposed to unusual environmental conditions or stressors. Because these compounds often come from rare or slow-growing organisms or are produced in minute quantities, collecting a target species in sufficient amounts for continued production of a new medicine may be unrealistic.

Sustainable management

It is estimated that less than 10 percent of coral reef biodiversity is known, and only a small fraction of the described species have been explored as a source of biomedical compounds. Even for known organisms, there is insufficient knowledge to promote their sustainable management. Unfortunately, a heavy reliance on coral reef resources worldwide has resulted in the overexploitation and degradation of many reefs, particularly those near major human populations. Managing these critical resources has become more difficult because of economic and environmental pressures and continuing human population growth.

Seahorses are a prime example of a resource that is rapidly collapsing. Demand for seahorses for use in traditional medicine increased 10-fold during the 1980s, and the trade continues to grow by 8 to 10 percent per year. With an estimated annual seahorse consumption of 50 tons in Asia alone, representing about 20 million animals supplied by 30 different countries, collection pressures on seahorses are causing rapid depletion of target populations. According to a study by Project Seahorse, seahorse populations declined worldwide by almost 50 percent between 1990 and 1995. In the absence of effective management of coral reefs and the resources they contain, many species that are promising as new sources of biochemical materials for pharmaceuticals and other products may be lost before scientists have the opportunity to evaluate them.
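The figures above imply both how small each traded animal is and how quickly the pressure compounds. The sketch below works this out using only the numbers quoted in this paragraph, with metric tons assumed for the 50-ton consumption estimate.

```python
import math

# Illustrative arithmetic using only the figures cited above (metric tons assumed).

asia_consumption_kg = 50 * 1000        # ~50 tons of seahorses consumed annually in Asia
animals_per_year = 20_000_000          # ~20 million animals
avg_mass_g = asia_consumption_kg * 1000 / animals_per_year
print(f"Average mass per traded animal: ~{avg_mass_g:.1f} g")   # about 2.5 g

# Trade reportedly growing 8 to 10 percent per year: how long until demand doubles?
for growth in (0.08, 0.10):
    doubling_years = math.log(2) / math.log(1 + growth)
    print(f"At {growth:.0%} annual growth, demand doubles in about {doubling_years:.0f} years")
# Roughly 9 years at 8 percent and 7 years at 10 percent.
```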

Expanded efforts to evaluate the medical potential of coral reef species are urgently needed.

Thus, as a first step in promoting continued biomedical research on marine natural products, countries must develop management plans for the sustainable harvest of potentially valuable invertebrates, and they must do so before large-scale extraction takes place. Because most of the species desired for biotechnology have little value as a food fishery, management strategies for sustainable harvest have been lacking, and much of the needed information on the population dynamics and life history of these organisms has never been collected. However, through joint efforts involving scientists, resource managers in the source country, and industry, it is possible to develop management plans that promote sustainable harvest, conservation, and equitable sharing of benefits for communities dependent on these resources.

For instance, researchers in the Bahamas identified a class of natural products, pseudopterosins, from a gorgonian coral (Pseudopterogorgia elisabethae) that have anti-inflammatory and analgesic properties. With help from the U.S.-funded National Sea Grant College Program, the population biology of the species was examined in detail, and the relevant information was applied toward development of a management plan for sustainable harvest. This has allowed researchers to obtain sufficient supplies over a 15-year period without devastating local populations. By ensuring an adequate supply, this effort ultimately led to the purification of a product now used as a topical agent in an Estee Lauder skin care product, Resilience. In 1995, pseudopterosin was among the University of California’s top 10 royalty-producing inventions; today it has a market value of $3 million to $4 million a year.

Commercialization

New avenues for the commercial development of compounds derived from coral reef species may enhance the use of these resources and contribute to the global economy. If properly regulated, bioprospecting activities within coral reef environments may fuel viable market-driven incentives to promote increased stewardship for coral reefs and tools to conserve and sustainably use coral reef resources. These activities may also promote beneficial socioeconomic changes in poor developing countries.

Unfortunately, the difficulty of finding new drugs among the millions of potential species, the large financial investment involved, and the long lead times before drugs can be brought to market have meant that the resources themselves have relatively low values. The anticancer metabolite developed from a common bryozoan, Bugula spp., is currently worth up to $1 billion per year, but the value of one sample in its raw form is only a few dollars. This makes it difficult to add significant value to coral reefs for conservation strictly on economic terms.

When bioprospecting has resulted in significant funds for conservation, special circumstances have been involved. The most success has been achieved when bioprospecting is carried out through international partnerships that include universities, for-profit companies, government agencies, conservation organizations, and other groups. Partnerships allow organizations to take advantage of differential expertise and technology, thereby providing cost-effective mechanisms for collection, investigation, screening, and development of new products. Partnerships also facilitate access to coral reef species, promote arrangements for benefit sharing, and assist in improving understanding of the taxonomy and biogeography of species of interest.

Many of the marine natural products partnerships negotiated in recent years between private firms and research institutes in developing countries have involved outsourcing by large R&D firms. In this approach, large companies engaged in natural products R&D work with suppliers, brokers, and middlemen in developing countries to obtain specimens of interest and with specialized companies that conduct bioassays or chemical purification of natural products. Through the development of contracts with several large pharmaceutical companies, Costa Rica was able to ensure that substantial funds were directed toward conservation. This was successful primarily because Costa Rica developed tremendous capacity to provide up-front work in taxonomy and initial screening of samples, which may not be the case in other developing countries.

An alternative approach often undertaken in the United States and Europe involves in-licensing, in which large R&D companies acquire the rights to bioactive compounds that have been previously identified by other firms or by nonprofit research institutes. For example, the National Cancer Institute (NCI) provides government research grants that support marine collecting expeditions and the preliminary extraction, isolation, and identification of a compound, its molecular structure, and its novel attributes. Once a potentially valuable compound is identified, NCI may patent it and license it to a pharmaceutical company to develop, test, and market. In this approach, the company is required to establish an agreement with the source country for royalties and other economic compensation. In addition, scientists in the host country are invited to assist in the development of the new product, and the U.S. government guarantees protection of biodiversity rights and makes provision for in-country mariculture of organisms that contain the compound, in the event that it cannot be synthesized.

The Convention on Biological Diversity (CBD) is leading an international effort to develop guidelines for access to coastal marine resources under jurisdictions of individual countries for marine biotechnology applications. The CBD is calling for conservation of biological diversity, the sustainable use of marine resources, and the fair and equitable sharing of benefits that arise from these resources, including new technologies, with the source country. Ratification of this agreement, from the standpoint of expanded development in marine biotechnology, requires that coastal nations agree on a unified regime governing access to marine organisms. Countries with coral reefs must also establish an acceptable economic value for particular marine organisms relative to the R&D investment of the biotech firm involved in the collection of the organism and the development of a new bioproduct. Although this type of international agreement would significantly affect the operations of the U.S. marine biotechnology industry, the United States cannot play an effective role in the process because it is not a party to the convention.

Options for sustainable use

The development and marketing of novel marine bioproducts can be achieved without depleting the resource or disrupting the ecosystem, but doing so requires an approach that combines controlled, sustainable collection with novel screening technologies, along with alternative sources for compounds of value. Instead of the large-scale collections that were formerly commonplace, more systematic investigations are now being undertaken, in which certain groups are targeted and the isolated materials are tested in a wide variety of screening assays. These collection missions involve the selective harvest of a very limited number of species over a broad area, with a focus on soft-bodied invertebrates that rely on chemical defenses for survival and on the marine microorganisms that coexist with them. Assays used in major pharmaceutical drug discovery programs are also beginning to consider the function of bioactive compounds in nature and their mechanisms of action, which can provide models for the development of new commercial products.

The ability to partition collections into categories of greater and lesser potential has raised the value of these species. For instance, sponges are ideal candidates for bioprospecting, because a single sponge can be populated by dozens of different symbiotic bacteria that produce an extraordinary range of chemicals. In Japan, researchers have examined more than 100 species of coral reef sponges for biomedical use, and more than 20 percent of them have been found to contain unique bioactive compounds. With greater knowledge of appropriate types of organisms for screening, companies may be willing to pay a premium for exclusive access to promising research prospects, thus creating an incentive to conserve ecological resources in order to charge access fees.

Investment incentives are needed to encourage partnerships to engage in marine natural products research.

With the advent of genomic and genetic engineering technologies, bioprospectors now have environmentally friendly and economically viable alternative screening tools. For any given species, a suitable sample consists of as little as 1 to 1.5 kilograms wet weight. In one screening approach, scientists collect small samples of an organism, extract the DNA from that species and its symbiotic microbes, and clone it into a domesticated laboratory bacterium. Thus, the genetically engineered bacterium contains the blueprint necessary to synthesize the chemical of interest, and it can ultimately create large quantities of the chemical without additional reliance on the harvest of wild populations.

Although synthetic derivatives provide an alternative to wild harvest, synthesis sometimes proves impossible or uneconomical, as in the case of an anticancer compound extracted from a sea squirt (tunicate). Mass production of a target species through captive breeding or mariculture may provide a consistent alternative supply. Many coral reef organisms in demand for the aquarium trade and the live reef food fish trade, as well as several invertebrates that contain valuable bioactive compounds, such as sponges, are promising candidates for intensive farming, and there are already a number of success stories. For example, sponge mariculture capitalizes on the ability of sponges to regenerate from small clippings removed from adult colonies. To minimize harvest impacts, only a small portion of the sponge needs to be removed for aquaculture; the cut sponge heals quickly and over time will regrow over the injury.

Mariculture offers another benefit as well. Through the use of selective husbandry or other mariculture protocols, it may be possible to select for a particular genetic strain of a species that produces a higher concentration of a metabolite of interest, thereby reducing the number of individuals needed for biotechnology applications. Mariculture can also provide a source of organisms to restock wild populations, which provides additional incentive for participation by a developing country with coral reef resources.

Four key steps

Coastal populations worldwide will continue to rely on coral reefs for traditional uses, subsistence, and commerce far into the future. In many cases, increased, unsustainable rates of collection coupled with pollution, habitat destruction, and climate change are threatening the vitality of these precious ecosystems. Coral reefs are vast storehouses of genetic resources with tremendous biomedical potential that can provide life-saving sources of new medicines and other important compounds, if these precious resources are properly cared for. To meet this challenge, research communities, government agencies, and the private sector must interact more effectively.

Through four key steps, the benefits of these activities can extend far beyond their medicinal potential to provide sustainable sources of income for developing countries and to promote increased stewardship of the resources. First, investment incentives are needed to encourage partnerships among governments, local communities, academia, and industry to increase marine natural products research in coral reef environments. Second, those who stand to gain from the discovery of a new product must direct technical and financial assistance toward research and monitoring of the target species and toward the development and implementation of sustainable management approaches in exporting (developing) countries. Third, biotech firms must promote equitable sharing of benefits with the entire communities or source countries from which the raw materials come. Finally, expanded efforts are needed to reduce the demand for wild harvest and to improve the yield of bioactive compounds, including through mariculture, selective husbandry, and genomic and genetic engineering.

Without environmentally sound collection practices, only a few will benefit financially from new discoveries, and only over the short term. In the long term, communities may ultimately lose the resources on which they depend. Many species will perish, including those new to science, along with their unrealized biomedical potential. The ultimate objective of marine biotechnology should not be to harvest large volumes and numbers of species for short-term economic gains, but rather to obtain the biochemical information these species possess without causing negative consequences to the survival of the species and the ecosystems that support them. We must strive for a balance among the needs of human health, economics, and the health of our coral reefs, all of which are inextricably intertwined. This approach will ensure that marine resources that may prove valuable in the fight against disease will be available for generations to come.

A Sweeter Deal at Yucca Mountain

As this is written in the late winter of 2002, the stage is set for a struggle in Congress over whether to override the impending Nevada veto of President Bush’s selection of the Yucca Mountain nuclear waste disposal site. The geologic repository that would be built there for spent fuel from nuclear reactors and for highly radioactive defense waste would be the first such facility anywhere in the world. The criticism and doubts raised about the president’s decision are cause enough–even for one long convinced that the place for the repository is Nevada–to wonder whether the Yucca Mountain project can be licensed and built.

Where I come out is, yes, the U.S. Senate and House of Representatives should overturn the Nevada veto. The accelerated procedures afforded by the Nuclear Waste Policy Act of 1982 proscribe the filibustering and other parliamentary tactics that otherwise might block this present chance for the greatest progress yet on a nuclear waste problem that has eluded solution for over three decades. But still confronting the project, if the Nevada veto is overturned, will be the multitudinous lawsuits that the state is bringing against it. Even if they fall short on the merits, these suits could raise Nevada’s bitterness toward the project to new levels, further intensify distrust of the site and how it was chosen, and delay a licensing application to the U.S. Nuclear Regulatory Commission by several years. What is required of Congress in these circumstances is not just an override of the state veto but also major new amendments to the Nuclear Waste Policy Act strengthening the Yucca Mountain project financially, technically, and politically.

Congress must, above all, seek a dramatic reconciliation between Washington and the state of Nevada. The goal should be a greater spirit of trust, an end to the lawsuits, substantial direct and collateral economic benefits for Nevada, a stronger influence for the state in the Yucca Mountain project, and a stronger University of Nevada, the state’s proudest institution. A possibility to consider would be for congressional leaders to invite the Nevada delegation on Capitol Hill to join with them in a collaborative legislative effort to establish in Nevada a new national laboratory on nuclear waste management.

The Nevadans could look to their own inventiveness in any such initiative, aware of course that the final product will come about from much pulling and hauling from diverse quarters and diverse interests. Here we put forward a few possibilities that might go into the mix. Although the new laboratory would be created as a permanent institution with a broad mandate, central to that mandate in the beginning would be to take over direction of the Yucca Mountain project from the U.S. Department of Energy. Equipped with its own hot cells and other facilities for handling radioactive materials, the laboratory could assume a hands-on role in much of the high-end research and development work that is now done by project contractors. Its director, appointed by the president for a fixed term of, say, seven years, and removable only by the president, could be a far stronger administrator than the nuclear waste program has ever had before and one who is allowed wide latitude. Indeed, should the director come to conclude that not even with the best science and engineering can Yucca Mountain be made a workable site, the director could go to the president and the Congress and recommend its rejection in favor of finding another candidate site, whether in Nevada or elsewhere.

An advisory committee chaired by the Governor of Nevada would follow the laboratory’s work closely and be aided in this by a selective, well-staffed group similar to the existing congressionally mandated, presidentially appointed Nuclear Waste Technical Review Board. Funding of the Yucca Mountain project and other activities under the Energy Department’s present Office of Civilian Radioactive Waste Management would continue to come from the Nuclear Waste Fund and the user fee on nuclear-generated electricity, but the new laboratory’s activities not covered by this dedicated funding would depend on other congressional appropriations.

Realistically, growth of the new lab would come, to one degree or another, at the expense of other national laboratories, particularly the existing nuclear weapons laboratories (Lawrence Livermore in California and Los Alamos and Sandia in New Mexico) where access for outside scientists and graduate students is severely constrained by their highly classified defense work. Creating the new lab would for some members of Congress be politically painful. But that would simply be part of the price for a successful Yucca Mountain project and, over the longer term, for new and more effective nuclear waste management initiatives across a much broader front.

Where would the new laboratory be located? At Yucca Mountain? In the vicinity of the University of Nevada’s home campus in Reno or near its new campus in Las Vegas? These would be delicate and important questions for Nevadans, but the new lab would surely bring new strength to the University in a variety of ways.

Of course, a great threshold question is whether there is any chance of Nevada’s political leaders actually doing an about-face and accepting a reconciliation that allows the Yucca Mountain project to go forward. It’s no sure thing, but consider the following: By the fall of 2002 Congress may already have overridden the Nevada veto, possibly by a comfortable margin. Also, the Nevada leaders will know that if their lawsuits succeed only in delaying the project, the state’s leverage for gaining major concessions from Congress will ultimately either vanish or be sharply reduced. Furthermore, the University of Nevada and many businesses may see a national laboratory in the state starting or encouraging major new economic activity for Nevada, not just in nuclear waste isolation but also in other high-tech work for government and private industry.

More money, more research

Financially, Congress could give both the project and the new national laboratory a major boost by designating the waste program as a mandatory account that is no longer to be denied half or more of the money collected each year from utility ratepayers in user fees on nuclear energy. In fiscal 2001 the fee revenue totaled $880 million. Moreover, an unexpended balance of nearly $12 billion has been allowed to pile up in the Nuclear Waste Fund in order to reduce the federal budget deficit. Congress must now forgo this budgetary sleight of hand and ensure that the needs of the Yucca Mountain project are properly met.

Technically, Congress should have the project assume an exploratory thrust going far beyond anything now contemplated by DOE. It could in a general way urge the new Nevada laboratory to consider an innovative phased approach for testing current plans and exploring attractive technical alternatives. The licensing application might call for two or more experimental waste emplacement modules to confirm the engineering feasibility of project plans.

Project reviewers, who include many proponents of a phased approach to repository development, could help identify new possibilities worthy of a trial. For instance, scientists at the Oak Ridge National Laboratory in Tennessee favor a concept of enveloping spent fuel with depleted uranium within the waste containers. They see this concept as doubly attractive, affording both greater assurance of waste containment and safe disposal of much of the nation’s environmentally burdensome inventory of depleted uranium. Some 600,000 tons of depleted uranium sits outside in aging steel cylinders at the two inactive uranium enrichment plants at Oak Ridge, Tennessee, and Portsmouth, Ohio, and the still active plant at Paducah, Kentucky. A decay product of depleted uranium is the dangerously radioactive radium-226.

Depleted uranium dioxide in a granular form could be used to fill voids in the waste containers and also be embedded in steel plating to create a tough, dense layer nearly 10 inches thick just inside the containers’ thin corrosion-resistant outer shell. It would be meant to serve as a sacrificial material, grabbing off any oxygen entering the containers and delaying for many thousands of years degradation of the spent fuel.

Congress should have the project assume an exploratory thrust going far beyond anything now contemplated by DOE.

A number of close followers of the Yucca Mountain project, in Nevada and elsewhere, doubt that its weaknesses will ever be overcome. But in my view the problems are curable and the purported alternatives are either illusory or unacceptable. The default solution if geologic isolation of spent fuel and high-level waste fails is continued surface storage. In principle, this could mean beginning central storage in Nevada or elsewhere, but what is far more likely, unfortunately, is for storage to remain for many years at some 131 sites in 39 states where the spent fuel and high-level waste are stored now. Indeed, the political effect of a congressional rejection of the project could be to freeze virtually all further movement of this material. With no Yucca Mountain project, there would be no foreseeable prospect of permanent disposal anywhere.

Fourteen years ago, Congress abandoned the effort to screen multiple candidate repository sites by enacting the Nuclear Waste Policy Act Amendments of 1987. The narrowing of the search to Yucca Mountain was “political,” to be sure, but it was also sensible and practical viewed on the merits. The cost of “characterizing” sites, put at not more than $100 million per site in 1982, was soaring, although no one could then foresee that by 2002 characterization of the Yucca Mountain site alone would exceed $4 billion. Moreover, Yucca Mountain offered clear advantages over the other two sites still in the running. A repository at Hanford, Washington, in highly fractured lava rock was to have been built deep within a prolific aquifer, posing a high risk of catastrophic flooding. A repository in the bedded salt of Deaf Smith County, Texas, would have penetrated the Ogallala Aquifer, a resource of great political sensitivity in that very rich agricultural county.

A search for a second repository site in the eastern half of the United States was abruptly terminated by the Reagan administration in 1986 essentially because the political price had become too great. Four U.S. Senate seats were at stake in the seven states most targeted by this search and the Republican candidates were becoming increasingly imperiled. Today, few believe Congress will ever reopen the search for repository sites.

A stronger project

Managers of the Yucca Mountain project may have unwittingly set a trap for themselves by choosing to rest the case for licensing far less on the mountain’s natural hydrogeologic capacity to contain radioactivity than on the engineered barriers that they propose. These barriers are principally an outer shell of nickel alloy for the massive spent-fuel and high-level waste containers, plus a titanium “drip shield” to go above the containers. The cost of the two together is put at $9.8 billion (in year 2000 dollars).

Quantifying the effectiveness of a well-defined engineered barrier might at first appear easier than determining the effectiveness of a natural system that is mostly hidden inside the mountain and only partly understood. But in truth the uncertainties associated with the one may be every bit as great as those associated with the other. The corrosion resistance over thousands of years of the chosen alloy or any other manmade material is simply not known, and experts retained by Nevada can point to corrosion processes that might well compromise the proposed barriers.

Granted, the uncertainties as to waste containment associated with the natural system are significant. Into the early 1990s project managers felt sure that since the repository horizon is 800 feet above the water table, waste containers would stay dry for many thousands of years and thus be protected from corrosion. But there has since been evidence (albeit ambiguous and now under intense review) of a small amount of water infiltrating the mountain from the surface and reaching the repository level within several decades. Given the less arid climate expected in the future, somewhat more water could be present to infiltrate, although any flows of water reaching waste emplacement tunnels might simply go through fractures to deeper horizons without affecting waste containers. But an additional concern has to do with water contained within pores in the rock causing a high general humidity.

The U.S. Geological Survey has formally supported selection of Yucca Mountain for repository development, although with conditions. The Nuclear Waste Technical Review Board sees no reason for disqualifying the site but characterizes the technical work behind the project performance assessment as “weak to moderate.”

An unresolved design issue is whether to allow an emplacement density for heat-generating spent fuel that would raise the temperature of the rock near waste containers above the boiling point of water, a question that bears directly on the extent of the repository’s labyrinth of emplacement tunnels. In view of this and other unresolved issues, whether the project can meet its target of filing a licensing application by 2004 is hotly disputed. But a delay of a few years or possibly even longer might be desirable in any case, affording time for project plans to include test modules for innovative engineered barriers that could strengthen the case for licensing–and allowing time for new institutional arrangements to fall into place if a new national laboratory were to assume direction of the project.

To sum up, at this critical juncture in our long tormented quest for a spent-fuel and high-level waste repository, three things appear needed. First, an override by Congress of Nevada’s veto of the Yucca Mountain site. Next, amendments to the Nuclear Waste Policy Act to encourage a profound political reconciliation between Nevada and Washington and to make the repository project stronger financially and technically. Finally, an aggressively exploratory design effort to ensure a repository worthy of our confidence in the safe containment of radioactivity over the long period of hazard.

A Makeover for Engineering Education

Hollywood directors are said to be only as good as their last picture. Maintaining their reputations means keeping up the good work–continuing to do encores that are not only high-quality but that fully reflect the tastes and expectations of the time.

A similar measure applies to engineers. Though we are fresh from a whole century’s worth of major contributions to health, wealth, and the quality of life, there is trouble in paradise: The next century will require that we do even more at an even faster rate, and we are not sufficiently prepared to meet those demands, much less turn in another set of virtuoso performances.

The changing nature of international trade and the subsequent restructuring of industry, the shift from defense to civilian applications, the use of new materials and biological processes, and the explosion of information technology–both as part of the process of engineering and as part of its product–have dramatically and irreversibly changed the practice of engineering. If anything, the pace of this change is accelerating. But engineering education–the profession’s basic source of training and skill–is not able to keep up with the growing demands.

The enterprise has two fundamental, and related, problems. The first regards personnel: Fewer students find themselves attracted to engineering schools. The second regards the engineering schools, which are increasingly out of touch with the practice of engineering. Not only are they unattractive to many students in the first place, but even among those who do enroll there is considerable disenchantment and a high dropout rate (of over 40 percent). Moreover, many of the students who make it to graduation enter the workforce ill-equipped for the complex interactions, across many disciplines, of real-world engineered systems. Although there are isolated “points of light” in engineering schools, it is only a slight exaggeration to say that students are being prepared to practice engineering for their parents’ era, not for the 21st century.

What’s needed is a major shift in engineering education’s “center of gravity,” which has moved virtually not at all since the last shift, some 50 years ago, to the so-called “engineering science” model. This approach–which emphasizes the scientific and mathematical foundations of engineering, as opposed to empirical design methods based on experience and practice–served the nation well during the Cold War, when the national imperative was to build a research infrastructure to support military and space superiority over the Soviet Union. But times have clearly changed, and we must now reexamine that engineering-science institution, identify what needs to be altered, and pursue appropriate reforms.

An agenda for change

Engineering is not science or even just “applied science.” Whereas science is analytic in that it strives to understand nature, or what is, engineering is synthetic in that it strives to create. Our own favorite description of what engineers do is “design under constraint.” Engineering is creativity constrained by nature, by cost, by concerns of safety, environmental impact, ergonomics, reliability, manufacturability, maintainability–the whole long list of such “ilities.” To be sure, the reality of nature is one of the constraint sets we work under, but it is far from the only one; it is seldom the hardest one and almost never the limiting one.

Today’s student-engineers need to acquire not only the skills of their predecessors but many more, and in broader areas. As the world becomes more complex, engineers must appreciate more than ever the human dimensions of technology, have a grasp of the panoply of global issues, be sensitive to cultural diversity, and know how to communicate effectively. In short, they must be far more versatile than the traditional stereotype of the asocial geek.

These imperatives strongly influence how a modern engineer should be educated, which means that he or she requires a different kind of education than is currently available in most engineering schools. In particular, we see six basic areas in great need of reform:

Faculty rewards. Engineering professors are judged largely by science-faculty criteria–and the practice of engineering is not one of them. Present engineering faculty tend to be very capable researchers, but too many are unfamiliar with the worldly issues of “design under constraint” simply because they’ve never actually practiced engineering. Can you imagine a medical school whose faculty members were prohibited from practicing medicine? Similarly, engineering professors tend to discount scholarship on the teaching and learning of their disciplines. Can we long tolerate such stagnation at the very source of future engineers? (These perceptions of engineering faculty are not merely our own. When the National Academy of Engineering convened 28 leaders from industry, government, and academia in January 2002 to discuss research on teaching and learning in engineering, the retreat participants agreed that although an increased focus on scholarly activities in engineering teaching and learning is much needed, the current faculty-reward system does not value these activities.)

Curriculum. Faculty’s weakness in engineering practice causes a sizeable gap between what is taught in school and what is expected from young engineers by their employers and customers. The nitty-gritty of particular industries cannot, and should not, be included in the curriculum–particularly for undergraduates. But although everyone pretty much agrees that students will continue to need instruction in “the fundamentals,” the definition of this term has been rapidly changing. Whereas physics and continuous mathematics largely filled the bill for most of the 20th century, there are now additional fundamentals. For example, discrete mathematics (essential to digital information technology), the chemical and biological sciences, and knowledge of the global cultural and business contexts for design are now important parts of an engineer’s repertoire.

The first professional degree. We can’t just add these “new fundamentals” to a curriculum that’s already too full, especially if we still claim that the baccalaureate is a professional degree. And therein lies the rub: Whereas most professions–business, law, medicine–do not consider the bachelor’s degree to be a professional degree, engineering does. Maintaining such a policy in this day and age is a disservice to students, as it necessarily deprives them of many of the fundamentals they need in order to function; and it is a misrepresentation to employers.

Formalized lifelong learning. It has been said that the “half-life” of engineering knowledge–the time in which half of what an engineer knows becomes obsolete–is in the range of two to eight years. This means that lifelong learning is essential to staying current throughout an engineering career, which may span some 40 years. Yet the notion, at least as a formalized institution, has not been part of the engineering culture. This has to change, as merely taking training in the latest technology is not good enough. The fundamentals you learned in college are still fundamental, but they aren’t the only ones in this rapidly changing profession.
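
To see what a half-life in that range implies over a 40-year career, treat it as simple exponential decay. The short sketch below is a back-of-the-envelope illustration only; the two- and eight-year half-lives are the figures quoted above, and the decay model itself is an assumption, not a claim about how engineers actually learn or forget.

    # Back-of-the-envelope sketch: fraction of what an engineer learned in
    # school that is still current after a 40-year career, assuming simple
    # exponential decay with the half-lives quoted above (2 to 8 years).
    # Purely illustrative; real obsolescence is not this tidy.

    def fraction_still_current(years, half_life):
        return 0.5 ** (years / half_life)

    career_years = 40
    for half_life in (2, 8):
        remaining = fraction_still_current(career_years, half_life)
        print(f"half-life {half_life} years: {remaining:.4%} of original knowledge still current")

Even at the generous end of the range, only about 3 percent of a graduate’s original knowledge would remain current over a full career, which is the arithmetic behind making lifelong learning a formal part of the profession.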

Diversity. An essential aspect of service to society is inclusiveness–the need to “leave no child behind.” But although diversity in our engineering schools has improved in recent years, we’ve leveled off. Fewer than 20 percent of entering freshmen are women, and underrepresented minorities account for just over 16 percent. Among the nation’s engineering faculty, the numbers are worse: Fewer than 10 percent are women, and fewer than 5 percent are underrepresented minorities. Another way to look at the situation is this: Although minority men and all women represent 65 percent of the general population, they account for only 26 percent of the B.S. graduates in engineering. Such figures are unacceptable, and not just as an equity issue. It’s a workforce issue and, even more important, it’s a quality issue. Our creative field is deprived of a broad spectrum of life experiences that bear directly on good engineering design. Put more bluntly, we’re not getting the bang for the buck that we should.

Technological literacy in the general population. Thomas Jefferson founded the University of Virginia in the conviction that we could not have a democracy without an educated citizenry. Given that technology is now one of the strongest forces shaping our nation, we think he would consider our present democracy imperiled. Though our representatives in Congress are regularly called upon to vote on technology-based issues that will profoundly affect the nation, they and the people who elect them are, for the most part, technologically illiterate. Engineering schools have not traditionally provided courses for non-engineering majors, but in our view it’s time they did. These courses will not be of the kind we are accustomed to teaching, as they’ll relate technology and the process of creating it–that is, engineering–to larger societal issues. But noblesse must oblige: Technological literacy is now essential to citizens’ pursuit of a better and richer life.

Steps in the right direction

Clearly, a great deal needs to be changed, and the scale of the challenge can be daunting. But enlightened, come-from-behind reinvention is nothing new to our society.

Consider recent turnarounds in the business sector, aided by methods that may similarly benefit education. Twenty years ago, U.S. industry was seriously lagging its counterparts in other countries, but U.S. companies found answers in modern quality-improvement techniques. A technique called “Six Sigma,” for example–used with great success by Motorola, General Electric, and Allied Signal, among others–basically forces you to identify the product, the customer, the current processes for making and delivering it, and the sources of waste. Then you redesign the system and evaluate it once again. This procedure continues indefinitely, resulting in a practice of constant reevaluation and reform.

By applying such standards of industrial quality control to engineering education, we could well create more excitement, add more value, and get more done for students in less time. Many of the seemingly insuperable problems of the largely arrested academic enterprise could yield imaginative answers.

One area of much-needed answers is the “supply side” issue: How can engineering schools attract more bright young people out of high school? Part of the solution, we believe, is a massive engineering-mentor program. Think of it as every engineer in the country identifying, say, four students with an interest in engineering and essentially adopting them for the duration of their school years–not just to give occasional encouragement but to stick with them and really guide them.

Many people in the profession stayed with engineering because at critical points in their careers they experienced the helping hand and timely advice of a mentor. Similarly, we could be there for these kids when the going gets tough and they are tempted to abandon engineering for an easier alternative. Eventually, like us, they will get hooked on engineering when they experience the thrill of invention–of bringing their skills to bear on a problem and achieving a useful and elegant solution, on time, on budget, and within all the other practical constraints. But until then, there needs to be the continuous support and interest of a mentor.

Numerous other innovations, both for increasing the supply of engineering students and improving the quality of their education, are possible. Now they will be more probable with the recent adoption, by the Accreditation Board for Engineering and Technology (ABET), of new and flexible criteria for putting authoritative stamps of approval on engineering schools’ curricula. Unlike previous criteria, which were rigidly defined, the Engineering Criteria 2000 encourage each school to be outcome-oriented, to define its own niche and structure its curriculum accordingly. This is a huge step in the right direction, liberating faculty to propose virtually any modification they deem appropriate, which may then be evaluated by ABET against the school’s goals. Essentially, the new criteria say: You can do that; just do it well!

Accreditation, though necessary, is not sufficient. When an innovation is in place and showing itself to be effective, it also needs to be publicly recognized so that it may be replicated or serve as an inspiration for similar efforts elsewhere. One mechanism for this process is the recently established Bernard M. Gordon Prize for Innovation in Engineering and Technology Education. Awarded by the National Academy of Engineering (NAE), it is a prominent way to highlight novel teaching methods that motivate and inform the next generation of engineering educators.

The Gordon Prize, which carries a cash award of $500,000 divided equally between the recipient and his or her institution, was presented for the first time this past February to Eli Fromm, professor of electrical and computer engineering and director of the Center for Educational Research at Drexel University’s College of Engineering. He was cited for implementing “revolutionary ideas that are showing dramatic results in areas such as student retention and minority involvement in engineering studies.” In particular, Fromm established the Enhanced Education Experience for Engineers (E4) program, in which faculty members from diverse disciplines teach side-by-side with engineering colleagues in a hands-on, laboratory atmosphere. The aim is to build students’ communication skills, expand their knowledge of business, and give them a deeper understanding of the design process itself.

This E4 program has now expanded to seven other academic institutions–under the new name of Gateway Engineering Education Coalition–and participating schools report an 86 percent increase in the retention of freshmen. They also note that the number of engineering degrees they now award to women has shot up by 46 percent, to Hispanics by 65 percent, and to African-Americans by 118 percent.

Organizations send a message

A basic condition for the reform of engineering education is changing the attitudes of engineering faculty, and one good way to win hearts and minds is for their professional organizations–especially those positioned to reward individual achievement–to take up the cause conspicuously.

The NAE, whose membership consists of the nation’s premier engineers recognized by their peers for seminal contributions, is one such organization, perhaps the country’s most prestigious. And it is strongly committed to moving engineering education’s center of gravity to a position relevant to the needs of 21st-century society. We refer to the Academy’s programs in this area as our “four-legged stool”:

First, we’ve reaffirmed that high-quality contributions to engineering education are a valid reason for election to the NAE. This criterion makes it clear that people’s creativity and excellence in engineering education can be rewarded in the same ways as outstanding technological contributions.

Second, we’ve established a standing committee of the Academy’s Office of the President–called, naturally enough, the Committee on Engineering Education–that identifies significant issues, organizes studies, develops long-term strategies, recommends specific policies to appropriate government agencies and academic administrations, coordinates with other leading groups in engineering and related fields, and encourages public education and outreach.

Third, we have created the Gordon Prize, essentially the “Nobel Prize” for engineering educators.

And fourth, the NAE is in the process of forming its very own center for focused research projects on teaching and learning in engineering. Usually we at the National Academies study things and then recommend that somebody else do something. Here we wish to also be implementers, developing innovative methods and disseminating the best results–our own as well as those of others.

Each of these initiatives serves a double purpose: developing or recognizing particular innovations and making the NAE’s imprimatur quite visible. The hope is that our activities send a message, particularly to engineering faculty throughout the country, that the Academy attaches great value to creative work in engineering education and wishes to acknowledge and spread the best ideas.

Other influential bodies must similarly get involved in this revitalization process, so that their efforts are mutually reinforcing. For example, we believe that most of what NAE is now trying to do in teaching and learning would not have been possible without ABET’s Engineering Criteria 2000.

Basically, to revitalize engineering education we must first and foremost change educators’ attitudes. Only then can engineering schools produce the open-minded and versatile modern engineers capable of making improvements to our quality of life–and to that of people around the world.

The average person today enjoys a great many advantages, most of them the result of engineering. But because we live in a time of rapid change, engineers in practice today face issues that scarcely constrained their predecessors, and the engineers we educate now will be practicing in environments likely to be very different from our own. Thus, if engineering education does not change significantly, and soon, things will only get worse.

The problem has now been studied to death, and the essential solution is clear. So let’s get on with it! It’s urgent that we do so.

Updating Automotive Research

On January 9, 2002, Department of Energy (DOE) Secretary Spencer Abraham announced a new public-private cooperative research program with the three major domestic automakers. According to a press release, the program would “promote the development of hydrogen as a primary fuel for cars and trucks, as part of our effort to reduce American dependence on foreign oil … [and] … fund research into advanced, efficient fuel cell technology, which uses hydrogen to power automobiles.” Called FreedomCAR (with CAR standing for cooperative automotive research), the program replaces the Partnership for a New Generation of Vehicles (PNGV), which was launched by the Clinton administration with great fanfare in 1993.

The reaction to FreedomCAR, as reflected in press headlines, was largely skeptical. “Fuelish Decision,” said the Boston Globe. “Fuel Cell Fantasy,” stated the San Francisco Chronicle. A Wall Street Journal editorial asserted that fuel cells were expensive baubles that wouldn’t be plausible without vast subsidies. Automotive News, the main automotive trade magazine, expressed caution, stating that “FreedomCAR needs firm milestones… Otherwise it will be little more than a transparent political sham.”

DOE has since released a tentative set of proposed performance goals for vehicle subsystems and components, which were immediately endorsed by the three automakers. Nonetheless, skepticism about the program continues, which is not surprising given the Bush administration’s ambivalence toward energy conservation and tighter fuel economy standards. Yet viewed strictly as an updating of PNGV, FreedomCAR is a fruitful redirection of federal R&D policy and a positive, if only a first, step toward the hydrogen economy. However, for FreedomCAR to become an effective partnership and succeed in accelerating the commercialization of socially beneficial advanced technology, additional steps will need to be taken.

What was PNGV?

The goal of PNGV was to develop vehicles with triple the fuel economy of current vehicles [to about 80 miles per gallon (mpg) for a family sedan], while still meeting safety and emission requirements and not increasing cost. It was in part an attempt to ease the historical tensions arising from the adversarial regulatory relationship between the automotive industry and federal government. It would “replace lawyers with engineers” and focus on technology rather than regulation to improve fuel economy. It also reflected the government’s recognition that the nation’s low fuel prices resulted in an absence of market forces needed to “pull” fuel-efficient technology into the marketplace. As the technical head of the government’s side of the partnership said in a 1998 Rand report: “It is fair to say that the primary motivation of the industry was to avoid federally mandated fuel efficiency and emissions standards.”

PNGV was managed by an elaborate federation of committees from the three car companies and seven federal agencies. The government’s initial role was to identify key technology projects already being supported by one of the participating agencies. Industry teams determined which projects would be useful and whether additional or new research was needed. Throughout the process, technical decisions were made by industry engineers in collaboration with government scientists.

PNGV was high-profile. It engaged leaders at the highest levels and was championed by Vice President Gore. It was also subjected to extraordinary scrutiny, with a standing National Research Council (NRC) committee conducting detailed annual reviews.

The lofty rhetoric about and intense interest in PNGV did not, however, result in increased federal funding of advanced vehicle R&D. PNGV’s budget has always been controversial, with critics dubbing it “corporate welfare.” The ambitious program was realized by moving existing federal programs and funds under the PNGV umbrella. Funding for the PNGV partnership remained relatively steady at about $130 million to $150 million per year (or $220 million to $280 million if a variety of related federal programs not directly tied to PNGV goals are included).

From the start, the corporate welfare criticism was largely unfounded and became less so over time. Initially, about one-third of PNGV funding went to the automakers. That was largely carried over from already existing programs, and most of it was passed through to suppliers and other contractors. In any case, the amount steadily dropped to less than 1 percent by 2001. Although definitive data are not available, in the latter years of the program, more than half of the funding went to the national energy labs, and most of the rest went to a variety of government contractors, automotive suppliers, and nonautomotive technology companies, with universities receiving well under 5 percent. The automakers also provided substantial matching funds, though a major portion of this spending was in proprietary product programs.

The relevant issue with regard to automakers should not have been corporate welfare but how the research was prioritized and funds were spent. The three automakers played a central role for several reasons: As the final vehicle assemblers and ultimate users of the technology, they had the best insight and judgment about research priorities, greater expertise and staff resources for assessing how development priorities would serve consumer preferences, and the ability and resources to lobby Congress on behalf of the PNGV program.

Another issue with PNGV was the use of a specific product as the goal. In general, it is wise to direct a program’s activities toward a specific tangible goal, and a prototype often fulfills that role. But in the case of PNGV, the goal for 2004 of building an 80-mpg production prototype that would cost no more to build than a conventional car was flawed. One problem is that government and industry managers were so focused on meeting the affordability goal that they felt obligated to pick technology–small advanced diesel engines combined with electric power trains–that was similar to existing technology and not the most promising in terms of societal benefits. Diesel engines have inherently high air pollutant emissions, and it is unknown whether they can meet U.S. environmental standards. In addition, neither advanced diesel nor hybrid electric engines are longer-term technologies. Honda and Toyota are already commercializing early versions of these technologies: Toyota began selling hybrid electric cars in Japan in 1997, and both Toyota and Honda began selling them in the United States in 2000. More fundamentally, as the final NRC committee review of the program so succinctly stated, “It is inappropriate to include the process of building production prototypes in a precompetitive, cooperative industry-government program. The timing and construction of such a vehicle is too intimately tied to the proprietary aspects of each company’s core business to have this work scheduled and conducted as part of a joint, public activity.”

Even the interim goal of hand-built concept prototypes by 2001 was questionable. Indeed, the goal of public-private partnerships with automakers should not be prototype vehicles. Automakers have garages full of innovative prototypes. What is needed is accelerated commercialization of socially beneficial technology.

Still, in some ways, PNGV was a success. Milestones were achieved on schedule; communication between industry and government reportedly improved; new technologies were developed, and some were used to improve the efficiency of conventional vehicle subsystems and components; the program disciplined federal advanced technology R&D efforts; scientific and technological know-how was transferred from the national labs; and apprehensive foreign competitors responded to the program with aggressive efforts of their own, which in turn sparked an acceleration of the U.S. efforts.

From a societal perspective, this boomerang effect may have been most important, because the foreign automakers feared that this partnership between the richest country and three of the largest automakers in the world would create the technology that would dominate in the future. New alliances (the European Car of Tomorrow Task Force and the Japan Clean Air Program) were formed. Toyota and Honda accelerated the commercialization of hybrid electric cars. Daimler Benz launched an aggressive fuel cell program. Ford reacted in turn by buying into the Daimler-Ballard fuel cell alliance and announcing plans to market hybrid electric vehicles in 2003. General Motors followed by dramatically expanding its internal fuel cell program, creating technology partnerships with Toyota, and buying into a number of small hydrogen and fuel cell companies. Struggling Chrysler, with its minimal advanced R&D capability, merged with Daimler Benz.

Why fuel cells and hydrogen?

Fuel cells provide the potential for far greater energy and environmental benefits than diesel-electric hybrids. Hydrogen fuel cell vehicles emit no air pollutants or greenhouse gases and would likely be more than twice as energy-efficient as internal combustion engine vehicles. When hydrogen is made from natural gas, as most of it will be for the foreseeable future, air pollution and greenhouse gases are generated at the conversion site (a fuel station or large, remote, centralized fuel-processing plant), but in amounts far less than those produced by comparable internal combustion engine vehicles.

Fuel cell vehicles are close to commercialization, but no major company has initiated mass production. In 1997, Daimler Benz announced that it would produce more than 100,000 fuel cell vehicles per year by 2004, and other automakers chimed in with similar forecasts. That initial enthusiasm quickly waned. Now, in 2002, several companies plan to place up to 100 fuel cell buses in commercial service around the world by the end of 2003 (none in the United States); Toyota has announced plans to sell fuel cell cars in Japan for $75,000, also in 2003, as has Honda; and a variety of automakers plan to place hundreds of fuel cell cars in test fleets in the United States, mostly in California, in that same time frame. The new conventional wisdom is that by 2010, fuel cell vehicles will progress to where hybrid electric cars are today, selling 1,000 to 2,000 per month in the United States, and that sales in the hundreds of thousands would begin two to three years later.

Energy companies must be brought into the partnership, because of their key role in the transition to fuel cell vehicles.

Two energy scenarios released in the fall of 2001 by Shell International suggest the wide range of possible futures. In one scenario, Shell posited that 50 percent of new vehicles would be powered by fuel cells in 2025 in the industrialized countries. In the second scenario, hybrid electric and internal combustion vehicles would dominate, with fuel cells limited to market niches.

Three key factors are slowing commercialization: low fuel prices, uncertainty over fuel choice, and the time and resources needed to reduce costs. Costs are expected to drop close to those of internal combustion engines eventually, but considerable R&D and engineering is still needed. Current fuel cell system designs are far from optimal. Consider that internal combustion engines, even after a century of intense development, are still receiving a large amount of research support to improve their efficiency, performance, and emissions (far more, even now, than is being invested in fuel cell development). Fuel cells are at the very bottom of the learning curve.
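
The point about the learning curve can be made concrete with the standard experience-curve model, in which unit cost falls by a fixed fraction each time cumulative production doubles. In the sketch below, the starting cost and the 80 percent progress ratio are illustrative assumptions, not program figures; only the general claim that costs fall with cumulative output comes from the discussion above.

    import math

    # Illustrative experience curve: each doubling of cumulative production
    # cuts unit cost to a fixed "progress ratio" of its previous value.
    # The starting cost and the 80% ratio are assumptions for illustration.

    def unit_cost(cumulative_units, first_unit_cost, progress_ratio):
        b = math.log(progress_ratio) / math.log(2)   # learning exponent (negative)
        return first_unit_cost * cumulative_units ** b

    first_cost = 1_000_000   # hypothetical cost of an early hand-built system, in dollars
    ratio = 0.80             # hypothetical 20% cost drop per doubling of output

    for units in (1, 1_000, 100_000, 10_000_000):
        print(f"{units:>12,} cumulative units: about ${unit_cost(units, first_cost, ratio):,.0f} each")

Under these assumptions, costs fall by roughly an order of magnitude for every thousand-fold increase in cumulative output, which is why early, low-volume production is the expensive part of the curve and why commercialization requires sustained investment.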

The fuel issue may be more problematic. Hydrogen is technically and environmentally the best choice, but it will take time and money to build a fuel supply system. Investments in hydrogen and hydrogen fuel cell vehicles by energy suppliers and automakers are slowed by the chicken-and-egg dilemma. Alternatively, methanol, gasoline, or gasoline-like fuels can be used, simplifying the fuel supply challenge, but the cost, complexity, energy, and environmental performance of vehicles would be degraded. As late as mid-2001, the conventional wisdom in industry was that gasoline or gasoline-like fuels would be used initially, followed later by hydrogen. Now, in the wake of the FreedomCAR announcement, a direct transition to hydrogen is gaining appeal.

Is FreedomCAR good policy?

Although FreedomCAR is an overdue corrective action, it is hardly a major departure. For one thing, fuel cell R&D was already gaining a greater share of PNGV funding (from about 15 percent of the DOE PNGV funds in the mid-1990s to about 30 percent in 2001), as automakers increasingly kept their knowledge about hybrid vehicle technology proprietary. Moreover, it appears that no major overhaul will take place as PNGV is turned into FreedomCAR. The program structure and the management team will remain essentially the same. Funding for fuel cell research will be increased slightly and funding for internal combustion engine research decreased slightly. The plan to produce production prototypes in 2004 has been abandoned.

More R&D funding must go to universities to train the engineers and scientists who will design future generations of vehicles.

Perhaps of greater concern is automaker reluctance to expand industry engagement to energy companies. This will likely limit the overall effectiveness of the program, because uncertainty about hydrogen supply and distribution is arguably the single biggest factor slowing the transition to fuel cell vehicles. Other automakers, including the Japanese, should also be engaged, because they, too, are ultimate users of the technology. But the best use of limited government R&D funds may be to target 1) small innovative technology companies and larger technology companies that are not already major automotive suppliers; and 2) universities, because of their expertise in basic research, but equally because they will train the industry engineers and scientists who will design and build these vehicles in the future.

Finally, FreedomCAR does nothing, at least in the short run, to deal with the issues of fuel consumption and emissions. Fuel cell vehicles are not likely to gain significant sales before 2010, and perhaps even later. Given the reality of slow vehicle turnover, this means that fuel cells would not begin to make a dent in fuel consumption until at least 2015. Thus, if oil consumption and carbon dioxide emissions are to be restrained, more immediate policy action will be needed. If little or nothing is done in these areas, the Bush administration will continue to face the justifiable criticism that FreedomCAR is a means of short-circuiting the strengthening of the corporate average fuel economy standards.

Government’s role

Fuel cells and hydrogen show huge promise. They may indeed prove to be the Holy Grail, eventually taking vehicles out of the environmental equation, as industry insiders like to say. In a narrow programmatic sense, FreedomCAR is unequivocally positive as an updating and refashioning of the existing R&D partnerships and programs. Still, for a variety of reasons, including low fuel prices, industry still does not have a strong enough incentive to invest in the development and commercialization of this advanced, socially beneficial technology. Government will continue to have an important role to play.

The recommendations set forth below are premised on the understanding that government R&D is most effective when it targets technologies that are far from commercialization and have potentially large societal benefits, when funding is directed at more basic research, when the relevant industries are fragmented and have low R&D budgets, and when there is some mechanism or process for facilitating the conversion of basic research into commercial products. A strategy to promote sustainable cars and fuels must contain the following elements:


Advanced vehicle research, development, and education

  • Basic research directed at universities and national labs, especially focused on materials research and key subsystem technologies that will also have application to a wide range of other electric-drive vehicle technologies.
  • Leveraged funding of innovative technology companies.
  • Funding to universities to begin training the necessary cohort of engineers and scientists. This might merit creation of a second FreedomEDUCATION partnership (building on DOE’s small Graduate Automotive Technology Education centers program).

Hydrogen distribution

  • Assistance in creating a hydrogen fuel distribution system (with respect to safety rules, initial fuel stations, standardization protocol, pipeline rules, and so forth), requiring some R&D funding but in more of a facilitating role.
  • Funding to assist the development and demonstration of key technologies, such as solid hydrogen storage, and demonstration of distributed hydrogen concepts, such as electrolysis and vehicle-to-grid connections.

This activity might merit a third FreedomFUEL partnership.


Incentives and regulation

  • Incentives and rules that direct automakers and energy suppliers toward cleaner, more efficient vehicles and fuels.
  • Incentives to consumers to buy socially beneficial vehicles and fuels.

These three sets of strategies must all be pursued to ensure a successful and timely transition to socially beneficial vehicle and fuel technology. The last set of initiatives is particularly critical, not just to ensure a timely transition to fuel cells and hydrogen but also to accelerate the commercialization and adoption of already existing socially beneficial technologies, including hybrid electric vehicle technologies.

Solving the Broadband Paradox

If The Graduate were being filmed today, the one-word piece of advice that young Benjamin Braddock would hear is “broadband.” Most simply defined as a high-speed communications connection to the home or office, broadband offers Americans the promise of faster Internet access, rapid data downloads, instantaneous video on demand, and a more secure connection to a variety of other cutting-edge technologies and services.

If it were to become ubiquitously available throughout the United States, broadband communications services might finally make possible some long-dreamed-of commercial applications, including telecommuting, video conferencing, telemedicine, and distance learning. Beyond transforming the workplace, broadband could open new opportunities in the home for activities such as electronic banking, online gaming, digital television, music swapping, and faster Web surfing in general.

For these reasons, a growing number of pundits and policymakers are saying that Americans need broadband and they need it now. Moreover, assorted telecom, entertainment, and computer sector leaders are also proclaiming that the future of their industries depends on the rapid spread of broadband access throughout the economy and society. For example, Technology Network (Tech Net), one of the leading tech sector lobbying groups, is asking policymakers to commit to a JFK-esque “man on the moon” promise of guaranteeing 100 megabits per second (Mbps) connections for 100 million U.S. homes and small businesses by the end of this decade. This represents a bold–some would say unrealistic–vision for the future, considering that most Americans today are using a 56K narrowband modem connection and balking at paying the additional fee for a 1.5-Mbps broadband hookup.
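
A little arithmetic shows what those connection speeds mean in practice. The sketch below compares download times for a hypothetical 100-megabyte file (the file size is an assumption for illustration; the connection speeds are the ones cited above), ignoring protocol overhead and congestion.

    # Rough download-time comparison for the connection speeds cited above.
    # The 100-megabyte file is a hypothetical example; overhead is ignored.

    FILE_SIZE_MB = 100
    file_size_bits = FILE_SIZE_MB * 8_000_000   # decimal megabytes to bits

    connections = {
        "56K dial-up modem": 56_000,             # bits per second
        "1.5-Mbps broadband": 1_500_000,
        "100-Mbps Tech Net target": 100_000_000,
    }

    for name, bits_per_second in connections.items():
        minutes = file_size_bits / bits_per_second / 60
        print(f"{name:<25} about {minutes:6.1f} minutes")

On that arithmetic, a download that ties up a dial-up line for roughly four hours takes about nine minutes at 1.5 Mbps and only a few seconds at the 100-Mbps target.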

What exactly is holding back the expansion of broadband services in America? Is a 100-Mbps vision within 10 years just a quixotic dream? What effect has regulation had on this sector in the past, and what role should public policy play in the future?

A digital white elephant?

As interesting as these questions are, the most important and sometimes forgotten question we should be asking first is: Do consumers really want this stuff? In the minds of many industry analysts, consumer demand for broadband services is simply taken for granted. Many policymakers see an inevitable march toward broadband and want to put themselves at the head of the parade. They have adopted the Field of Dreams philosophy: “If you deploy it, they will subscribe.”

But is this really the case? Are Americans clamoring for broadband? Are the benefits really there, and if so, do citizens understand them?

The answers to these questions remain surprisingly elusive for numerous reasons. This market is still in its infancy, and statistical measures are still being developed to accurately gauge potential consumer demand. Thus far, the most-quoted surveys have been conducted by private consulting and financial analysis firms. The cited results are all over the map, and critical evaluation is difficult because the full detailed analysis is available only to those who pay the hefty subscription fees. However, when one looks at government statistics about actual broadband use, it seems clear that the public has not yet caught broadband fever. According to the Federal Communications Commission (FCC), only 7 percent of U.S. homes subscribe to a high-speed access service, even though broadband access is available to roughly 75 to 80 percent of U.S. households. A clear paradox seems to exist in the current debate over this issue: Everyone is saying the public demands more broadband, yet the numbers do not yet bear that out. What gives?
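
The gap can be expressed as a simple take rate, the share of households that could buy broadband and actually do. Here is a minimal sketch using only the FCC figures quoted above:

    # Take rate implied by the FCC figures above: homes subscribing divided
    # by homes to which broadband is available.

    subscribing = 0.07                # 7 percent of U.S. homes subscribe
    for available in (0.75, 0.80):    # availability of roughly 75 to 80 percent
        take_rate = subscribing / available
        print(f"availability {available:.0%}: take rate of about {take_rate:.1%} among reachable homes")

In other words, fewer than one in ten households that can already buy broadband chooses to do so.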

The FCC’s recently issued Third Report on the Availability of High Speed and Advanced Telecommunications Capability concluded that broadband was being made available to Americans in a “reasonable and timely fashion.” The report noted that over 70 percent of homes have cable modem service available to them, 45 percent have telco-provided digital subscriber line (DSL) service available, 55 percent of Americans have terrestrial fixed wireless broadband options, and almost every American household can purchase satellite-delivered broadband today.

Importantly, however, the FCC concluded that although broadband was within reach of most U.S. homes, most households were not yet subscribing. The FCC report notes that, “cost appears to be closely associated with the number of consumers willing to subscribe to advanced services.” It cites one private-sector survey that revealed that 30 percent of online customers were willing to pay $25 per month for broadband, but only 12 percent were willing to pay $40. Broadband service currently costs $40 to $50 per month on top of installation costs. This is a lot of money for the average household, especially when compared to other monthly utility bills.
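
Those two survey points imply a fairly steep demand curve. As a rough illustration (treating the two points as if they lay on a single demand curve and using the midpoint, or arc, convention), the implied price elasticity works out as follows; the calculation is ours, not the FCC’s or the survey firm’s.

    # Rough arc (midpoint) price elasticity from the survey figures cited
    # above: 30% of online customers willing to pay $25/month, 12% at $40.
    # Treats the two points as one demand curve; illustration only.

    p1, q1 = 25, 0.30
    p2, q2 = 40, 0.12

    pct_change_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_change_p = (p2 - p1) / ((p1 + p2) / 2)
    elasticity = pct_change_q / pct_change_p

    print(f"implied arc price elasticity: about {elasticity:.1f}")   # roughly -1.9

An elasticity near minus two says that, at current prices, demand is highly sensitive to the monthly fee, which reinforces the point that cost rather than availability is the binding constraint.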

And therein lies the real reason why broadband subscribership remains so sluggish: Most Americans still view broadband as the luxury good it really is instead of the life necessity that some policymakers paint it to be. Not every American needs, or even necessarily wants, a home computer or a connection to the Internet. This is especially the case for elderly households and households without children. In fact, children are a critical source of demand for the Internet and for broadband.

The National Telecommunications and Information Administration (NTIA) recently issued a report, A Nation Online: How Americans Are Expanding Their Use of the Internet, which found that a stunning 90 percent of children between the ages of 5 and 17 now use computers and that 75 percent of 14-to-17-year-olds and 65 percent of 10-to-13-year-olds use the Internet. Moreover, households with kids under 18 are more likely to access the Internet (62 percent) than are households with no children (53 percent).

The moral of the story is that to the extent that there is any sort of “digital divide” in this country, it is between the old and the young. We may just need to wait for the younger generation to grow up and acquire wallets and purses before broadband demand really intensifies.

But beyond the generation gap issue, other demand-side factors are holding down broadband adoption rates. For example, residential penetration rates are being held down by the fact that broadband access in the workplace is often viewed as a substitute for household access. If I can get online at work for a few minutes during the lunch hour each day and order goods from bandwidth-intensive sites such as Amazon.com, JCrew.com, or E-Bay, why do I really need an expensive broadband hookup at home at all? A narrowband dialup connection at home will give me easy access to e-mail and even allow me to get around most Web sites without much of a headache. I’ll just have to be patient when I hit the sites with lots of bells and whistles.

Another important demand-side factor that must be taken into account is the lack of so-called “killer apps,” or broadband applications that would encourage or even require consumers to purchase high-speed hookups for their homes. Although it makes many people (especially policymakers) uncomfortable to talk about it, the two most successful killer apps so far have been Napster and pornography. Like it or not, the illegal swapping of copyrighted music and the downloading of nudie pics have probably done more to encourage broadband subscription than any other online application thus far. While politicians work hard to rid the world of online file sharing and porn, they may actually be eliminating the only two services with enough appeal to convince consumers to take the broadband plunge.

But this certainly doesn’t count as the most serious obstacle policymakers have created to the growth of broadband markets. Regulation has played, and continues to play, a very important role in how service providers deploy broadband.

Regulatory roulette

Beyond the question of how much demand for broadband services really exists in the present marketplace, important supply-side questions remain the subject of intense debate as well. Many policymakers and members of the consuming public are asking why current providers are not doing more to roll out broadband service to the masses.

Regulation is certainly a big part of the supply-side problem. The primary problem that policymakers face in terms of stimulating increased broadband deployment is that the major service providers have decidedly different regulatory histories. Consider the radically different regulatory paradigms governing today’s major broadband providers.

  • Telephone companies have traditionally been designated as common carriers by federal, state, and local regulators. As common carriers, they have been expected to carry any and all traffic over their networks on a nondiscriminatory basis at uniform, publicly announced rates. At the federal level, the regulation of telephone companies generally falls under Title II of the Communications Act, and this regulation is carried out by the Common Carrier Bureau at the FCC. Today, telephone companies provide broadband service to Americans through DSL technologies that operate over the same copper cables that carry ordinary phone traffic. Telephone companies account for almost 30 percent of the current marketplace.
  • Cable companies have traditionally been more heavily regulated at the municipal level, because each cable company was quarantined to a local franchise area. Although they gained the exclusive right to serve these territories, rate controls and programming requirements were traditionally imposed as well. But cable has not been treated as a common carrier. Rather, the industry has been free to make private (sometimes exclusive) deals with content providers on terms not announced to the public beforehand. At the federal level, cable regulations fall under Title VI of the Communications Act and are usually managed by the Cable Services Bureau at the FCC. Cable companies provide broadband service to Americans through cable modem technologies and are the leading provider of broadband, accounting for just under 70 percent of current users.
  • Satellite and wireless providers have been less heavily regulated than telephone and cable carriers, but many rules still govern the way this industry does business. The federal regulations these carriers face are found in various provisions of the Communications Act and subsequent statutes, but most oversight responsibilities fall to the Cable Services Bureau, which is ironic given the wire-free nature of satellite transmissions. The FCC’s Wireless Bureau also has a hand in the action. Like cable providers, satellite companies are considered private carriers rather than common carriers. Unlike cable and telephone companies, wireless carriers have not encountered as much direct regulation by state or local officials, given the more obvious interstate nature of the medium. (The exception to this is municipal zoning ordinances governing tower antenna placement, which continue to burden the industry.) Today, wireless providers offer broadband service to the public through a special satellite dish or receiving antenna and set-top box technologies. With the highest monthly subscription fees and the most expensive installation and equipment charges, satellite companies have captured less than 2 percent of the market.

These three industry sectors–telephony, cable, and satellite–are the primary providers of broadband connections to the home and business today. Although they use different transmission methods and technologies, they all essentially want to provide consumers with the same service: high-speed communications and data connectivity. And yet these providers are currently governed under completely different regulatory methodologies. FCC regulations are stuck in a regulatory time warp that lags behind current market realities by several decades, and regrettably the much-heralded Telecommunications Act of 1996 did nothing to alter the fundamental nature of these increasingly irrelevant and artificial legal distinctions.

The current regulatory arrangement means that firms attempting to offer comparable services are being regulated under dissimilar legal standards. It betrays the cardinal tenet of U.S. jurisprudence that everyone deserves equal treatment under the law, and the danger is that it could produce distorted market outcomes. Can these contradictory regulatory traditions be reconciled in such a way that no one player or industry segment has an unfair advantage over another? In theory, the answer is obviously yes, but in practice it will be quite difficult to implement.

Most favored nation

The public policy solution is to end this regulatory asymmetry not by “regulating up” to put everyone on equally difficult footing but rather by “deregulating down.” That is, to the extent legislators and regulators continue to set up ground rules for the industry at all, they should consider borrowing a page from trade law by adopting the equivalent of a “most favored nation” (MFN) clause for telecommunications. In a nutshell, this policy would state that: “Any communications carrier seeking to offer a new service or entering a new line of business should be regulated no more stringently than its least-regulated competitor.”

Such an MFN for telecommunications would ensure that regulatory parity exists within the telecommunications market as the lines between existing technologies and industry sectors continue to blur. Placing everyone on the same deregulated level playing field should be at the heart of telecommunications policy to ensure nondiscriminatory regulatory treatment of competing providers and technologies at all levels of government.

So much for theory. In practice, the difficulty is that deregulation of this industry is not popular with policymakers these days. In fact, the recent debate over broadband deregulation in Congress has been an incredibly heated affair, with all the industry players and special interests squaring off over the Internet Freedom and Broadband Deployment Act of 2001 (H.R. 1542). Sponsored by House Energy and Commerce Chairman Billy Tauzin (R-La.) and ranking member John Dingell (D-Mich.), the Tauzin-Dingell bill would allow the Baby Bell companies, which offer local phone service, to provide customers with broadband services in the same way that cable and satellite companies are currently allowed to, free of the infrastructure-sharing provisions of the Telecom Act of 1996.

The Baby Bells are reluctant to make a large investment in broadband infrastructure if they will be forced to let their competitors use that infrastructure. In addition, under the current regulatory regime the Baby Bells are not certain whether or not they can offer broadband services to customers outside their local service areas. (They are clearly forbidden to offer phone services outside these areas.) Passage of the Tauzin-Dingell bill would resolve both of these questions and clear the way for the Baby Bells to make a major commitment to broadband service.

Cable companies, the large long-distance telephone companies, and small telecom resellers vociferously oppose the Tauzin-Dingell measure, arguing that it would represent the end of the road for them. These companies would prefer not to have to compete head-to-head with the Baby Bells or to have to invest in their own infrastructure. An intense lobbying, public relations, and advertising campaign was initiated to halt the measure, and the Bell forces responded in kind with stepped-up lobbying and ads of their own. On February 27, after months of acrimonious debate, the House of Representatives passed the Tauzin-Dingell measure with some last-minute modifications. But it will likely prove to be a Pyrrhic victory for the Bells, because of the bill’s limited support in the Senate. Sen. Ernest Hollings (D-S.C.), a longtime enemy of the Baby Bells and deregulation in general, has vowed to kill the bill when it enters the Senate Commerce Committee, which he rules with an iron hand.

The bottom line is that deregulation has a very limited constituency in today’s Congress. Even proposals aimed at leveling the playing field for all providers, which is essentially what the Tauzin-Dingell bill does, have very limited chances of achieving final passage in today’s legislative environment. This is especially the case given that carriers seem unwilling to forgo the insatiable urge to lobby for old and new rules that hinder their competitors at every turn. Remember Cold War-era “MAD” policy? The escalating lobbying and public relations battles have become the telecom industry’s equivalent of Mutually Assured Destruction: If you screw us, we’ll screw you.

What Congress might do

Although it appears increasingly unlikely that Congress will take the steps needed to clean up the confusing and contradictory legal quagmire the industry finds itself stuck in, a new class of broadband bills is simultaneously being considered that would authorize a variety of promotional efforts to spur broadband deployment. For example, Senate Majority Leader Tom Daschle (D-S.D.) has argued that government “should create tax credits, grants, and loans to make broadband service as universal tomorrow as telephone access is today.” And even though recent government reports such as the NTIA and FCC studies cited above illustrate that computer and broadband usage rates have been increasing, Sen. Patrick Leahy (D-Vt.) reacted to this news by noting, “I suspect we have to add money in the Congress” to boost the availability of these technologies.

Daschle and Leahy are not alone in calling for government to take a more active role in promoting broadband use. In fact, one bill, the Broadband Internet Access Act (S. 88, H.R. 267), has attracted almost 200 sponsors in the House and over 60 in the Senate. The bill would create a tax incentive regime to encourage communications companies to deploy broadband services more rapidly and broadly throughout the United States. The measure would offer a 10 to 20 percent tax credit to companies that roll out broadband services to rural communities and “underserved” areas.

Whereas the Broadband Internet Access Act would represent an indirect government subsidy, more direct subsidization efforts are also on the table. Last fall, the bipartisan duo of Rep. Leonard Boswell (D-Iowa) and Rep. Tom Osborne (R-Neb.) introduced the Rural America Technology Enhancement (RATE) Act (H.R. 2847), which would authorize $3 billion in loans and credits for rural broadband deployment programs and establish an Office of Rural Technology within the Department of Agriculture to coordinate technology grants and programs. And these bills are just the tip of the iceberg; there are dozens more like them in Congress.

Welcome to the beginning of what might best be dubbed the “Digital New Deal.” In recent years, legislators and regulators have been promoting a veritable alphabet soup of government programs aimed at jump-starting the provision of broadband, especially in rural areas. Although only a handful of such programs have been implemented thus far, many of these proposals could eventually see the light of day, because so many policymakers seem eager to do something to put themselves at the front of a technological development that they see as inevitable. Deregulating the market so that this development can follow its own course apparently will not enable them to take credit for what happens.

The problem, however, is that Washington could end up spending a lot of taxpayer money with little gain to show for it, because it is unlikely that tax credits or subsidies would catalyze as much deployment as policymakers imagine. In the absence of fundamental regulatory reform, many providers are unlikely to increase deployment efforts significantly. Although a 10 to 20 percent tax credit may help offset some of the capital costs associated with network expansion, many carriers will still be reluctant to deploy new services unless a simple and level legal playing field exists.
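
A stylized calculation shows why a credit in that range may not change many deployment decisions. The per-home capital cost below is a hypothetical placeholder, not a figure from the bills or from industry; only the 10 to 20 percent credit range comes from the proposals discussed above.

    # Stylized effect of the proposed tax credits on per-home deployment cost.
    # The $1,200 cost per home passed is a hypothetical placeholder; only the
    # 10-20% credit range comes from the proposals discussed above.

    capex_per_home = 1_200   # hypothetical deployment cost per home passed, in dollars

    for credit in (0.10, 0.20):
        net_cost = capex_per_home * (1 - credit)
        print(f"{credit:.0%} credit: carrier still bears about ${net_cost:,.0f} of ${capex_per_home:,} per home")

Under any plausible cost assumption, the carrier still bears the great bulk of the expense and all of the regulatory uncertainty, which is why the credits are unlikely to substitute for a level legal playing field.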

If legislators sweetened the deal by offering industry a 30 to 50 percent credit to offset deployment costs, it might make a difference. But if subsidy proposals reached that level, it would raise the question: Why not just let government build the broadband infrastructure in rural areas itself? Ironically, that is exactly what a number of small rural municipal governments are proposing to do today. Frustrated with the slow pace of rollout by private companies, some local authorities are proposing to turn broadband into yet another lackluster public utility. Private companies are fighting the proposal, of course, but consumers should also be skeptical of efforts by city hall to model a broadband company after the local garbage or sewage service. Is that really a good model for such a dynamic industry? Fortunately, these broadband municipalization efforts have not made much progress. Most legislators still want to begin by jump-starting private-sector deployment through promotional efforts.

In the end, perhaps the most damning argument against a tax credit and subsidy regime for broadband is the threat of politicizing this industry by allowing legislators and regulators to become more involved in how broadband services are provided. By inviting government in to act as a market facilitator, the industry runs the risk of being subjected to greater bureaucratic micromanagement. Experience teaches us that what government subsidizes, it often ends up regulating as well. It is not hard to imagine that such tinkering with the daily affairs of industry might become more commonplace if Washington starts subsidizing broadband deployment. That explains why T. J. Rodgers, president and CEO of Cypress Semiconductor, has cautioned the high-tech industry about “normalizing relations” with Washington, D.C. As Rodgers says, “The political scene in Washington is antithetical to the core values that drive our success in the international marketplace and risks converting entrepreneurs into statist businessmen.”

Solving the broadband paradox will require steps by policymakers, industry providers, and consumers alike if the dream of ubiquitous high-speed access is to become a reality. Policymakers need to undertake some much-needed regulatory housecleaning by removing outmoded rules and service designations from the books. New spending initiatives or subsidization efforts are unlikely to stimulate much broadband deployment. What companies, innovators, and investors really need is legal clarity: an uncluttered, level playing field for all players that does not attempt to micromanage this complicated sector or its many current and emerging technologies.

Industry players will need to undertake additional educational efforts to make consumers aware of what broadband can do for them. Ultimately, however, as important as such educational efforts are, there is no substitute for intense facilities-based investment and competition to help drive down cost, which still seems to be the biggest sticking point for most consumers. With luck, new killer apps will also come along soon to help drive consumer demand, much as Napster and the brief file-sharing craze did before litigation shut the practice down.

Finally, consumers will need to be patient and understand that there is no such thing as a free broadband lunch. It will take time for these technologies to spread to everyone, and even as they become more widely available, they will be fairly expensive to obtain at first. Costs will come down over time (if the demand is really there), but you’ll still need to shell out a fair chunk of change to satisfy your need for speed online.

Putting Teeth in the Biological Weapons Convention

In the fall of 2001, letters sent through the U.S. mail containing powdered anthrax bacterial spores killed five people, infected 18 others, disrupted the operations of all three branches of the U.S. government, forced tens of thousands to take prophylactic antibiotics, and frightened millions of Americans. This incident demonstrated the deadly potential of bioterrorism and raised serious concerns about the nation’s ability to defend itself against more extensive attacks.

The anthrax crisis also made more urgent the need to prevent the acquisition and use of biological and toxin weapons–disease-causing microorganisms and natural poisons–by states as well as terrorist organizations. At present, the legal prohibitions on biological warfare (BW) are flawed and incomplete. The 1925 Geneva Protocol bans the use in war of biological weapons but not their possession, whereas the 1972 Biological and Toxin Weapons Convention (BWC) prohibits the development, possession, stockpiling, and transfer of biological and toxin agents and delivery systems intended for hostile purposes or armed conflict, but it has no formal measures to ensure that the treaty’s 144 member countries are complying with the ban.

Because the materials and equipment used to develop and produce biological weapons are dual use (suitable both for military ends and legitimate commercial or therapeutic applications), the BWC bans microbial and toxin agents “of types and quantities that have no justification for prophylactic, protective, or other peaceful purposes.” Given this inherent ambiguity, assessing compliance with the BWC is extremely difficult and often involves a judgment of intent. Moreover, the treaty lacks effective verification measures: Article VI offers only the weak option of petitioning the United Nations (UN) Security Council to investigate cases of suspected noncompliance, which has proven to be a political nonstarter.

The BWC’s lack of teeth has reduced the treaty to little more than a gentleman’s agreement. About 12 countries, including parties to the BWC such as Iraq, Iran, Libya, China, Russia, and North Korea, are considered to have active BW programs. This level of noncompliance suggests that the legal restraints enshrined in the treaty are not strong enough to prevent some governments from acquiring and stockpiling biological weapons. Thus, it is essential to take concrete steps to reinforce the biological disarmament regime.

Despite the fall 2001 terrorist attacks, however, recent efforts to adopt monitoring and enforcement provisions for the BWC have gone nowhere. Indeed, negotiations at a meeting of the BWC member states in November and December 2001 broke down, in large part because of actions taken by the United States. Instead of the mandatory and multilateral approach favored by most Western countries, the Bush administration has advocated a package of nine voluntary measures, most of which would be implemented through national legislation. Although the administration’s approach has some value for combating bioterrorism, it is doubtful that it will be sufficient to address the problem of state-level noncompliance with the biological weapons ban.

History of failure

Efforts to strengthen the BWC have a long history. At the Second and Third Review Conferences of the treaty in 1986 and 1991, member states sought to bolster the BWC by adopting a set of confidence-building measures that were politically rather than legally binding. These measures included exchanges of information on vaccine production plants (which can be easily diverted to the production of BW agents), past activities related to BW, national biodefense programs, and unusual outbreaks of disease. The level of participation in the confidence-building measures, however, has been poor. From 1987 to 1995, only 70 of the then 139 member states of the BWC submitted data declarations, and only 11 took part in all rounds of the information exchange.

In 1992 and 1993, a panel of government verification experts known as VEREX assessed the feasibility of monitoring the BWC from a scientific and technical standpoint. The VEREX group concluded that a combination of declarations and on-site inspections could enhance confidence in treaty compliance and deter violations. Consequently, BWC member states established the Ad Hoc Group in September 1994 to “strengthen the effectiveness of and improve the implementation” of the BWC, including the development of a system of on-site inspections to monitor compliance with the treaty. In July 1997, the Ad Hoc Group began to negotiate a compliance protocol to supplement the BWC, but differences in national positions were significant.

In April 2001, the chairman of the Ad Hoc Group, Tibor Tóth of Hungary, proposed a compromise text that sought to bridge the gaps. It contained these key elements:

  • Mandatory declarations of biodefense and biotechnology facilities and activities that could be diverted most easily to the development or production of biological weapons
  • Consultation procedures to clarify questions that might arise from declarations, including the possibility of on-site visits
  • Transparency visits to randomly selected declared facilities to check the accuracy of declarations
  • Short-notice challenge investigations of facilities suspected of violating the BWC, declared or undeclared, as well as field investigations of alleged biological weapons use

Although most delegations were prepared to accept the chairman’s text as a basis for further negotiations, the new Bush administration conducted an interagency review and found 37 serious problems with the document. U.S. officials argued that the draft protocol would be ineffective in catching violators, would create a false sense of security, would impose undue burdens on the U.S. pharmaceutical and biotechnology industries, and could compromise government biodefense secrets. Other delegations countered that the protocol, though flawed, offered a reasonable balance between conducting on-site inspections intrusive enough to increase confidence in compliance and safeguarding legitimate national security and business information. Nevertheless, the United States declared that the draft protocol could not be salvaged and withdrew from the Ad Hoc Group negotiations on July 25, 2001. Although other countries considered proceeding with the talks without the United States, they quickly rejected this option. Instead, the mandate of the Ad Hoc Group was preserved so that the negotiations could potentially resume at a later date, after a change in the political climate.

The next opportunity for progress came in November 2001, during the Fifth Review Conference of the BWC in Geneva. On the first day of the meeting, John Bolton, the head of the U.S. delegation and Under Secretary of State for Arms Control and International Security, accused six states of violating the BWC: Iran, Iraq, Libya, and North Korea (all parties to the BWC); Syria (which has signed but not ratified); and Sudan (which has neither signed nor ratified). Bolton said that additional unnamed member states were also violating the convention and insisted that the review conference address the problem of noncompliance.

As an alternative to the BWC Protocol, which Bolton bluntly stated was “dead, and is not going to be resurrected,” the United States offered an “alternatives package” of nine voluntary measures that could be implemented through national legislation or by adapting existing multilateral mechanisms. They include:

  • Criminalizing the acquisition and possession of biological weapons
  • Restricting access to dangerous microbial pathogens and toxins
  • Supporting the World Health Organization’s (WHO’s) global system for disease surveillance and control
  • Establishing an ethical code of conduct for scientists working with dangerous pathogens
  • Contributing to an international team that would provide assistance in fighting outbreaks of infectious disease
  • Strengthening an existing UN mechanism for conducting field investigations of alleged biological weapons use so that BWC member states would be required to accept investigations on their territory

Several delegations welcomed the U.S. package but suggested that it did not go far enough and that some type of legally binding agreement among BWC member states would be necessary. On the last day of the conference, however, the United States insisted that the mandate of the Ad Hoc Group be terminated, thereby eliminating the sole forum for negotiating multilateral measures to strengthen the treaty. Because preserving the Ad Hoc Group’s mandate had long been a bottom line for many delegations, the U.S. proposal prevented the consensus needed to adopt a politically binding Final Declaration. In a desperate bid to prevent the BWC Review Conference from failing completely, chairman Tóth suspended the meeting for a year.

The Review Conference will reconvene in Geneva on November 11, 2002. Whether progress can be achieved before the conference resumes remains to be seen. One problem is that the United States continues to resist any formal multilateral agreements, creating a split between Washington and other Western countries. Moreover, without the Ad Hoc Group, no multilateral forum exists to negotiate the ideas in the U.S. alternatives package. During the period preceding the resumption of the conference, it will be important for the participating states to hammer out their differences; creative thinking will be needed to find a way out of the current impasse.

The U.S. alternatives package

The Bush administration’s current approach to strengthening the BWC, which emphasizes voluntary national measures, may have some benefit in reducing the threat of bioterrorism, but it will not be sufficient to address the problem of state-level noncompliance. There are ways, however, in which the U.S. proposals could be improved.

The first set of measures proposed by the United States relates to Article IV of the BWC, which deals with national implementation. This article requires each member state, in accordance with its constitutional processes, to take any necessary steps to prohibit and prevent the activities banned by the BWC on its territory or anywhere under its jurisdiction. Because Article IV is vaguely worded, it has been interpreted in various ways, and few of the 144 BWC member states have enacted domestic implementing legislation imposing criminal penalties on individuals who engage in illicit biological weapons activities. Not until 1989 did the United States develop its own implementing legislation, the Biological Weapons Antiterrorism Act, which imposes criminal penalties of up to life imprisonment, plus fines, on anyone who acquires a biological weapon or assists a foreign state or terrorist organization in doing so.

Under the new U.S. proposal, the legislatures of BWC member states that have not already done so would adopt domestic legislation criminalizing the acquisition, possession, and use of biological weapons. As a key element of such laws, states would improve their ability to extradite biological weapons fugitives to countries prepared to assume criminal jurisdiction, either by amending existing bilateral extradition treaties to include biological weapons offenses, or by arranging to extradite for BW offenses even when a bilateral treaty is not in place with the country seeking extradition. In addition, BWC member states would commit to adopt and implement strict national regulations for access to particularly dangerous pathogens, along with guidelines for the physical security and protection of culture collections and laboratory stocks.

Within the United States, the federal Centers for Disease Control and Prevention (CDC) regulates the interstate transport of 36 particularly hazardous human pathogens and toxins, permitting the transfer of these agents only between registered facilities that are equipped to handle them safely and have a legitimate reason for working with them. Similar regulations on transfers of dangerous plant and animal pathogens are administered by the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS). In response to the anthrax letter attacks, Congress is strengthening the statutory framework relating to biological weapons by extending the current controls on transfers of dangerous pathogens to prohibit the possession of such agents by unauthorized individuals or for other than peaceful purposes. Presumably, the U.S. government hopes that other nations will adopt similar legislation.

The proposed U.S. measures to strengthen Article IV also include urging BWC member states to sensitize scientists to the risks of genetic engineering and to explore national oversight of high-risk experiments (see sidebar). In addition, states would be encouraged to develop and adopt a professional code of conduct for scientists working with pathogenic microorganisms, possibly building on existing ethical codes such as the Hippocratic oath.

A second set of measures in the U.S. package aims to strengthen the BWC’s Article VII, on assisting the victims of a biological attack, and Article X, on technical and scientific cooperation in the peaceful uses of biotechnology. The U.S.-proposed measures for assistance and cooperation would require member states to adopt and implement strict biosafety procedures for handling dangerous pathogens, based on those developed by WHO or equivalent national guidelines, and to enhance WHO’s capabilities for the global monitoring of infectious diseases. This latter measure could help to deter the covert use of biological weapons by detecting and containing the resulting outbreak at an early stage, thereby reducing its impact. An enhanced global disease surveillance system would also increase the probability that an epidemic arising from the deliberate release of a biological agent would be promptly investigated, recognized as unnatural in origin, and attributed to a state or terrorist organization. Further, the United States has proposed the creation of international rapid response teams that would provide emergency and investigative assistance, if required, in the event of a serious outbreak of infectious disease. BWC member states would be expected to indicate in advance what types of assistance they would be prepared to provide.

The third set of measures proposed by the United States is designed to strengthen the BWC’s Article V, on consultation and cooperation, by addressing concerns over treaty compliance. One proposed measure would augment the consultation procedures in Article V by creating a “voluntary cooperative mechanism” for clarifying and resolving compliance concerns by mutual consent, through exchanges of information, visits, and other procedures. The other measure would adapt a little-known procedure by which the UN secretary general can initiate field investigations of alleged chemical or BW incidents. If the secretary general determines that an allegation of use could constitute a violation of international law, he or she has the authority to assemble an international team of experts to conduct an objective scientific inquiry.

When the UN field investigations mechanism was first developed in 1980, the General Assembly or Security Council had to pass a resolution requesting the secretary general to launch an investigation. This procedure was used for investigations of alleged chemical warfare by the Soviet Union and its allies in Southeast Asia and Afghanistan in 1980-83, and by Iraq and Iran in 1984-88, during the Iran-Iraq War. Experience demonstrated, however, that it was essential to conduct a field investigation while the forensic evidence was still fresh, and that the procedure of requiring a UN body to make a formal request was too cumbersome and lengthy to permit a rapid response.

In view of this problem, on November 30, 1987, the General Assembly adopted Resolution 42/37 empowering the secretary general to launch, on his or her own authority, an immediate field investigation of any credible complaint of alleged chemical or biological weapons use. The Security Council adopted a similar resolution on August 26, 1988, making the secretary general the sole arbiter of which allegations to investigate and the level of effort devoted to each investigation. In 1992, the secretary general launched investigations of the alleged use of chemical weapons by RENAMO insurgents in Mozambique and by Armenian forces in Azerbaijan. In both cases, UN expert teams concluded that the allegations were false.

No UN field investigations have been requested since 1992. Now that the BWC Protocol negotiations have been placed on indefinite hold, however, the ability of the secretary general to initiate investigations of alleged biological weapons use could fill a major gap in the disarmament regime. Although all the UN investigations conducted to date have involved the alleged use of chemical or toxin warfare agents, the United States has proposed expanding the existing mechanism to cover suspicious outbreaks of infectious disease that might result from the covert development, production, testing, or use of biological weapons. The U.S. proposal would also require BWC member countries to accept UN investigations on their territory without right of refusal, which is not currently the case.

Will they work?

The various U.S. measures presented at the Fifth Review Conference are modest steps for reducing the threats of BW and bioterrorism, but they would do little to reinforce the biological disarmament regime. Although the Bush administration has good reason to be troubled by the evidence of widespread noncompliance by members of the BWC, the remedies it has proposed are not commensurate with the gravity of the problem.

A key weakness of relying almost exclusively on domestic legislation to address biosecurity concerns is that national laws cannot impose uniform international standards. Legislation criminalizing the possession and use of biological weapons by individuals may vary considerably from country to country, resulting in an uneven patchwork that could create loopholes and areas of lax enforcement exploitable by terrorists. Moreover, some states will fail to pass such laws or will not enforce them. As an alternative to national legislation, the Harvard Sussex Program on CBW Armament and Arms Limitation, a nongovernmental group, has developed a draft treaty criminalizing the possession and use of biological weapons. This text could serve as a starting point for multilateral negotiations to strengthen the BWC.

Similarly, tighter U.S. regulations on access to dangerous pathogens, although desirable, will not significantly reduce the global threat of bioterrorism unless such controls are implemented internationally. Hundreds of laboratories and companies throughout the world work with dangerous pathogens, yet restrictions on access vary from country to country. To harmonize these national regulations, the United States should pressure the UN General Assembly to negotiate a “Biosecurity Convention” requiring all participating states to impose uniform limits on access to dangerous pathogens, so that only bona fide scientists are authorized to work with these materials. In addition, the treaty should establish common international standards of biosafety and physical security for microbial culture collections that contain dangerous pathogens, whether they are under commercial, academic, or government auspices.

The U.S. proposal for an expanded global disease-monitoring system run by WHO could play an important, albeit indirect, role in strengthening the BWC. Nevertheless, it is essential that WHO’s public health activities not be linked explicitly to monitoring state compliance with the BWC. Because WHO epidemiologists conduct investigations of unusual disease outbreaks only at the invitation of the affected country, the organization must preserve its political neutrality; any suspicions about WHO’s motives could seriously compromise its ability to operate. Accordingly, although the U.S. proposal for greater funding of global disease surveillance is welcome, the money for this purpose should be provided directly through the WHO budget and kept separate from efforts to strengthen the BWC.

As for UN field investigations of alleged biological weapons use, the historical record suggests that such efforts can yield useful findings if they are carried out shortly after an alleged attack and if the expert group is granted full access to the affected sites and personnel. Under optimal conditions, small groups of three to five experts can carry out field investigations rapidly and cheaply. Nevertheless, it is unrealistic to expect UN member countries to waive all right of refusal and accept international investigations on their territory in the absence of a formal treaty that provides legally binding rights and obligations.

Without such a treaty, countries accused of using biological weapons could simply deny investigators access to the alleged attack sites and the affected populations. Because such countries would be under no legal obligation to cooperate with the UN, the political consequences of a refusal would be minimal. Indeed, UN investigations of chemical and toxin weapons use in Laos, Cambodia, and Afghanistan during the early 1980s failed to yield conclusive results because the accused countries refused the UN experts access to the alleged attack sites. If, however, the obligation to accept UN investigators were legally binding, a denial of access would have far more serious consequences, possibly leading to the imposition of economic sanctions on the refusing country.

Finally, although the existence of a measure to investigate allegations of use could help to deter countries from employing biological weapons, is it really desirable to wait until such weapons have been used before the secretary general can initiate an investigation? Would it not be preferable to prevent states or terrorists from developing, producing, and testing them in the first place? This more ambitious objective would mean granting the secretary general the authority to investigate not only the alleged use of biological weapons but also facilities suspected of their illicit development and production, an option that the U.S. proposal does not include. To this end, BWC member states should negotiate a legally binding agreement that obligates them to cooperate with UN field investigations on their territories of the alleged development, production, and use of biological weapons, as well as suspicious outbreaks of disease.

U.S. flexibility needed

The use of anthrax-tainted letters sent through the mail to kill and terrorize U.S. citizens has seriously challenged the international norm against BW and terrorism and made it imperative to strengthen the existing disarmament and nonproliferation regime. Although the Bush administration’s package of proposals for strengthening the BWC is a useful step in the right direction, the United States must show greater flexibility by permitting meaningful efforts to expand on these ideas and to negotiate them in a multilateral forum. Should the administration persist in its ideological opposition to multilateral arrangements of any kind, efforts to strengthen the BWC will probably remain in limbo indefinitely.

If the resumption of the Fifth Review Conference in November 2002 fails to yield constructive results, the credibility of the international biological disarmament regime will continue to erode. The consequences of such an outcome could be grim indeed. As the know-how and dual-use technologies needed to develop, produce, and deliver biological weapons continue to diffuse worldwide, the ability to inflict mass injury and death will cease to be a monopoly of the great powers and will become accessible to small groups of terrorists, and even to mentally deranged individuals. To prevent this nightmare from becoming a reality, the United States should join with other nations in taking urgent and meaningful steps to reinforce the BWC.

The Need for Oversight of Hazardous Research

In recent decades, dramatic advances in molecular biology and genetic engineering have yielded numerous benefits for human health and nutrition. But these breakthroughs also have a dark side: the potential to create more lethal instruments of biological warfare (BW) and terrorism. Harnessing the powerful knowledge emerging from the biosciences in a way that benefits humankind, while preventing its misuse, will require the scientific community to regulate itself.

An inadvertent discovery that became known in early 2001 highlights the risks. Australian scientists developing a contraceptive vaccine to control field mouse populations sought to enhance its effectiveness by inserting the gene for the immune regulatory protein interleukin-4 into mousepox virus, which served as the vaccine carrier. Insertion of the foreign gene unexpectedly transformed the normally benign virus into a strain that was highly lethal, even in mice that previously had been vaccinated against mousepox. The experiment demonstrated that the novel gene combinations created by genetic engineering can yield, on rare occasions, more virulent pathogens. Although the Australian team debated for months the wisdom of publishing their findings, they finally did so as a means of warning the scientific community.

As scientists obtain a flood of new insights into the molecular mechanisms of infection and the host immune response, this information could be applied for nefarious purposes. Indeed, until at least 1992, Soviet/Russian military biologists developed advanced BW agents by engineering pathogenic bacteria to be resistant to multiple antibiotics and vaccines, creating viral “chimeras” by combining militarily relevant traits from different viruses, and developing incapacitating or behavior-modifying agents based on natural brain chemicals.

In view of these troubling developments, the scientific community will have to address the problem of hazardous research–ideally through self-governance. Many scientists oppose any limits on scientific inquiry, but because public outrage over an accident involving a genetically engineered pathogen could compel Congress to impose draconian restrictions, it is in the interest of scientists to make their research safer. One precedent for self-regulation already exists: In February 1975, some 140 biologists, lawyers, and physicians met at the Asilomar Conference Center near Monterey, California, to discuss the risks of recombinant DNA technologies and to develop a set of research guidelines, overseen by a Recombinant DNA Advisory Committee (RAC).

To prevent the deliberate misuse of molecular biology for malicious purposes, the scientific community, working through professional societies and national academies of science, should negotiate a set of rules and procedures for the oversight of hazardous research in the fields of microbiology, infectious disease, veterinary medicine, and plant pathology. Regulated activities would include the cloning and transfer of toxin genes and virulence factors, the development of antibiotic- and vaccine-resistant microbial strains and genetically engineered toxins, and the engineering of “stealth” viruses to evade or manipulate human immune defenses.

The oversight mechanism should be global in scope and cover academic, industrial, and government research. Various models are under consideration. Under one approach, legitimate but high-risk research projects would be reviewed by a scientific oversight board, similar to the RAC but operating at the international level. Checks and balances would be needed to ensure that the international oversight board has the power and authority it requires to enforce the regulations, while preventing it from becoming corrupt and arbitrary, unduly constraining scientific freedom, or abusing its privileged access to sensitive and proprietary information. Furthermore, governments may be reluctant to grant an international body binding review authority over national biodefense programs. Simply requiring countries to notify the oversight board of such activities and to describe them in general terms may be all that can reasonably be accomplished.

Scientific journals will also need to develop guidelines for declining to publish research findings of direct relevance to offensive BW or terrorism, such as the Australian mousepox results. Because the ethos of the scientific community opposes censorship of any kind, a strong professional consensus will be needed before data can be embargoed on the grounds that its dissemination could harm society. Given the complexity and sensitivity of these issues, the process of developing an international mechanism to regulate hazardous dual-use research will be long and difficult, requiring the active participation of a variety of stakeholders, including scientists, lawyers, and politicians from several countries.