Nuclear Technology’s Numerous Uses

We should not let unjustified fear of radiation create obstacles to continued progress and benefits.

In his 1953 “Atoms for Peace” address to the United Nations, President Dwight D. Eisenhower challenged scientists and engineers to harness the atom for humanitarian purposes in medicine, agriculture, and other non-power applications of direct benefit. Half a century later, nuclear technology has had an astounding economic and employment impact in the United States (see Table 1). The totals in dollars and jobs are impressive, but perhaps the biggest revelation is that the atom has a substantially larger impact outside the nuclear power sector than within it.

Table 1. Overall Impact of Nuclear Technology in the United States.a

                          1991                         1995
                  Sales             Jobs       Sales             Jobs
                  (billion dollars) (million)  (billion dollars) (million)
 Radiation              257          3.7             331          4.0
 Nuclear Power           73          0.4              90          0.4
 Total                  330          4.1             421          4.4

a. Using a multiplicative economic model that includes secondary revenue and jobs created by the primary sectors.

Perhaps the most significant success story over the past half-century in harnessing radiation to serve modern humanity is found in the field of medicine.

Sterilizing medical equipment. Radiation in high enough doses kills microorganisms, so gamma radiation is used to sterilize dressings, surgical gloves, bandages, and other supplies routinely used during medical procedures. Today, well over half of the sterile medical equipment used in modern U.S. hospitals has been treated with radiation. This is safer and cheaper than most other methods (such as steam) because it can be done after the item is packaged, and the sterile shelf life is practically infinite as long as the package is not opened.

New drug testing. Substantial testing must be done before new drugs are approved. This includes determining how a product attacks a targeted disease and whether it has side effects. Radioisotopes, because of their unique imaging characteristics (via particle emission), are ideally suited to answering such questions–including questions of material uptake, metabolism, distribution, and elimination of unwanted residues from the body. For at least 80 percent of the new drugs approved for medical use by the U.S. Food and Drug Administration (FDA), radiation was a crucial component of success in the approval process. The International Atomic Energy Agency estimates that some 100 to 300 radiopharmaceuticals are in routine use throughout the world, and most are commercially available.

Diagnostic techniques. One of the earliest large-scale uses of radiation in medicine came during World War I, when portable x-ray units helped field surgeons save many lives. Today, dental x-rays, chest x-rays, mammograms, and numerous other tests are used routinely in the medical and dental professions.

But x-rays, useful as they are, provide only a snapshot of a particular piece of the anatomy. The imaging properties of radioisotopes allow modern nuclear medical specialists to measure the activity of some specific physiological or biochemical function in the body as a function of time. Two of the most common technologies are single photon emission computed tomography (SPECT) and positron emission tomography (PET), which are used to detect cancer. Nuclear diagnostic techniques are now routinely used throughout the industrial world to determine anomalies in the heart, brain, kidneys, lungs, liver, breasts, and thyroid glands. Bone and joint disorders, along with spinal disorders, also benefit directly from this routine use of radioisotopes.

Therapeutic approaches. Until recently, the use of radiation to actually cure diseases was rather limited. One of the first therapeutic applications involved using iodine-131 (I-131) to treat thyroid cancer. Since the thyroid has a special affinity for iodine, it is a relatively simple and straightforward matter to have a patient drink a carefully determined amount of I-131 in a palatable chemical solution. The I-131 then preferentially lodges in the thyroid gland, and the beta-emitting properties of this radioisotope target and destroy the thyroid malignancy. Since I-131 has a half-life of eight days, it effectively disappears within a few weeks. Radiation is now used widely in the treatment of other cancers as well.
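The claim that I-131 “effectively disappears” within a few weeks follows directly from exponential decay. A minimal sketch in Python, using only the eight-day half-life cited above (the week marks are simply illustrative):

```python
# Fraction of I-131 remaining after t days, given an 8-day half-life.
def fraction_remaining(t_days, half_life_days=8.0):
    return 0.5 ** (t_days / half_life_days)

for weeks in (1, 2, 4, 8):
    t_days = weeks * 7
    print(f"After {weeks} week(s): {fraction_remaining(t_days):.1%} remains")
```

Roughly 9 percent of the original activity remains after four weeks and well under 1 percent after eight.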

Most of the current therapeutic procedures deliver radiation to the patient externally. Accelerators deliver either protons directly to the target or electron (beta) beams that strike a secondary target to produce x-rays. Although this can have substantial benefits, it is impossible to keep the radiation from killing or impairing healthy tissue in the immediate vicinity, especially when the beam must pass through healthy tissue to reach the malignancy.

There are three principal ways to minimize injury to healthy cells from radiation therapy: (1) rotating the external beam around the patient, (2) creating radioisotopes only at the site of the malignancy, and (3) developing a method to deliver appropriate radioisotopes directly to the cancerous tissue.

An example of the first approach is the “gamma knife,” where the radioactive source is delivered from many directions, with the beam continuously focused on the targeted abnormality but with only small amounts of radiation passing through healthy tissue.

An example of the second approach is boron-neutron capture therapy. Boron is introduced into the patient as part of a special chemical carrier, so that it preferentially concentrates at the tumor site. A neutron beam is then focused on the boron, producing alpha particles that destroy the malignant cells only in the immediate vicinity of the concentrated boron. Because alpha particles are typically stopped within one human cell from their point of origin, the intense radiation damage is quite localized.
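For reference, the capture reaction that produces those alpha particles can be written as follows; the energy shown is the commonly cited approximate value, and in most captures a low-energy gamma ray is emitted as well:

```latex
% Boron-10 thermal neutron capture (approximate, commonly cited energy release)
{}^{10}\mathrm{B} + n_{\mathrm{thermal}} \;\longrightarrow\; {}^{7}\mathrm{Li} + {}^{4}\mathrm{He}\,(\alpha) + \approx 2.3\ \mathrm{MeV}
```

The very short range of the recoiling lithium nucleus and the alpha particle is what confines the damage to cells near the concentrated boron.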

An example of the third approach is cell-directed radiation therapy. In order to confine the damage locally, either beta or alpha emitters are needed. For solid tumors, one method of getting the radioisotope to the target is direct injection, assuming that the tumor is accessible. Brachytherapy, for instance, is used to treat prostate cancer: Several “seeds,” each containing a small amount of a radionuclide such as I-125 or palladium-103 within a titanium capsule about the size of a grain of rice, are placed directly into the prostate gland, where they remain for life. Another cell-directed method involves attaching the radioisotope to a chemical that has a special affinity for the malignancy. This is called the monoclonal antibody (or “smart bullet”) approach. It is particularly suited for treating malignancies that are not confined to a particular spot, such as leukemia and non-Hodgkin’s lymphoma.

Although many of these therapeutic applications of radiation are still in relatively early trial stages, the potential for success is enormous.

Agriculture

There remains a huge need to find new ways to increase food production and deliver food without spoilage to the growing global population.

Greater crop production. By attaching radioactive tracers to known quantities and varieties of fertilizers, it is possible to directly determine nutrient efficiencies as the labeled products are absorbed at critical locations in the plant. This can help to substantially reduce the amount of fertilizer required to produce robust yields.

Water is becoming quite scarce in many areas of the world. Neutron moisture gauges can measure the hydrogen component of water in both the plant and the surrounding soil. Thus, they are ideal instruments to help farmers make the best use of limited water supplies and are now found on many large U.S. farms.

Another effective way to improve crop production is the development of new varieties–ones that can better withstand heat or storm damage, mature earlier to escape frost damage and allow crop rotation, resist diseases and droughts, provide better growth and yield patterns, deliver improved nutritional value, allow improved processing quality, and so on. Specialized radiation techniques–either directly bombarding seeds to alter DNA structures or irradiating crops to induce variations in the resulting seeds–can greatly accelerate the selection process. Radiation was the key element in the development of 89 percent of about 2,250 new crop varieties in the past 70 years; three-quarters of these irradiation-induced varieties were food crops, and the rest were ornamental flowers.

To date, China has benefited the most from using radiation to improve crop species. As of 2002, nearly 27 percent of the crops grown in China were developed this way. The equivalent figure elsewhere ranges from 11.5 percent in India and 9.3 percent in Russia to 7.8 percent in the Netherlands, 5.7 percent in the United States, and 5.3 percent in Japan. Indeed, the application of radiation techniques to the development of new crop varieties has probably provided the greatest global economic value of any form of harnessing radiation.

Improving animal health. Farm animals have likewise benefited from the application of radiation techniques. One key area concerns the optimal use of natural pastures or commercially prepared feeds. This is accomplished by labeling feed with specialty radioisotopes, such as carbon-14, and then tracing the path of the feed through the animal’s digestive system to determine where and how quickly it is broken down and converted into body tissue or milk. This helps determine the feed’s nutritional value.

Radioisotopes have also been used to develop vaccines that are effective against certain animal diseases. For example, rinderpest (“cattle plague”)–a dreaded disease that has killed millions of cattle on African farms over the past four decades–has been eliminated with radiation-derived vaccines in 16 of the 18 African countries previously affected.

Eradication of pests. One proven way to use nuclear technology in controlling or even eradicating unwanted insects is the sterile insect technique. This involves “factory breeding” large numbers of the target insect and sterilizing the males by exposing them to gamma irradiation. When the sterilized males are released into infested areas and mate with wild females, no offspring are produced; if the sterilized males greatly outnumber the wild males in the area, the pest population collapses and can be eradicated. Perhaps the largest successes to date in using this technique occurred in Mexico: The Mediterranean fruit fly (the medfly) was knocked out entirely by 1981, and a screwworm eradication program yielded some $3 billion in benefits to the Mexican economy by 1991.
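The logic of “greatly outnumber” can be made concrete with a toy version of the classic Knipling-style model, in which the fraction of fertile matings each generation equals the ratio of wild males to all males present. The population size, release size, and growth factor below are hypothetical numbers chosen only to illustrate the dynamics:

```python
# Toy Knipling-style model of the sterile insect technique (illustrative only).
# Each generation, a wild female mates fertilely with probability
# wild / (wild + sterile_release); r is a hypothetical per-generation
# growth factor in the absence of any control.
def simulate(wild=100_000, sterile_release=500_000, r=5.0, generations=6):
    for g in range(1, generations + 1):
        fertile_fraction = wild / (wild + sterile_release)
        wild = wild * r * fertile_fraction   # offspring come only from fertile matings
        print(f"Generation {g}: ~{wild:,.0f} wild insects")
        if wild < 1:
            print("Population effectively eradicated.")
            break

simulate()
```

With a fixed release that starts out five times the wild population, this sketch collapses the pest population by several orders of magnitude within half a dozen generations, which is the essence of the argument that sterile males must greatly outnumber wild ones.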

Food processing. Tragically, infestation and spoilage prevent one-fourth to one-half of the food produced in the world from reaching people. In addition, the food that does reach them can become unsafe to eat because of contaminants such as insects, molds, and bacteria. The U.S. Centers for Disease Control and Prevention estimated in 1999 that some 5,000 Americans die each year from food-borne diseases, and about 30 million others become sick, with about 300,000 of them requiring hospitalization.

Food irradiation involves subjecting food to carefully controlled amounts of ionizing radiation, such as beta particles or gamma rays, to break the DNA bonds of targeted pathogens. This is especially effective in disrupting the reproductive cycle of bacteria and other pathogens. It can eradicate unwanted organisms and specific non-spore-forming pathogenic microorganisms such as salmonella. It can also interfere with physiological processes such as sprouting in potatoes or onions. Thus the shelf life of many foods can be extended appreciably, and the presence of food-borne disease organisms such as Escherichia coli can be dramatically reduced. It is important to note that food processed by radiation does not become radioactive: At the doses used, it is impossible for beta, gamma, or x-rays to make food radioactive.
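The effectiveness of a given dose is usually expressed through a D10 value, the dose that cuts a bacterial population tenfold. A small sketch of the arithmetic; the 0.6-kilogray figure here is a hypothetical round number for illustration, not a regulatory or species-specific value:

```python
# Illustrative log-reduction arithmetic for food irradiation.
# D10 = dose (kGy) that reduces a bacterial population by a factor of 10.
def surviving_fraction(dose_kGy, d10_kGy=0.6):   # 0.6 kGy is a hypothetical D10
    return 10 ** (-dose_kGy / d10_kGy)

for dose in (0.6, 1.8, 3.0):
    print(f"{dose} kGy -> surviving fraction {surviving_fraction(dose):.0e}")
```

Each additional multiple of the D10 dose buys another factor-of-ten reduction, which is why modest, well-controlled doses can sharply reduce organisms such as salmonella without sterilizing the food outright.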

One of the prime advantages of food irradiation is that it sterilizes food without altering its form or taste. Older methods of food processing, which rely on heating or freezing, extreme drying or salting, or chemical treatments, generally do change the way food tastes and/or looks.

Widespread acceptance of food irradiation by the general public has been slow, but there are several signs–particularly in the United States–that consumer acceptance is not far away. Major supermarkets have signed on to offer irradiated meat at some stores. And the 2002 Farm Bill approved by Congress mandated that commodities such as meat and poultry that are treated by any technology approved by the U.S. Department of Agriculture and the FDA for improving food safety must be made available to the National School Lunch Program. Food irradiation is included in this mandate.

Industry

Although modern factories are the source of most of the products we use daily, the use of radiation in industry is probably the application of this technology most hidden from ordinary citizens.

Process control and plant diagnostics. Because radiation has the ability to penetrate matter, industrial measurements can be made using radioisotopes without direct physical contact with either the source or the sensor. This allows online measurements to be made nondestructively while the material being measured is in motion. Measurements that are typically made in production lines include liquid levels, the density of materials in vessels and pipelines, the thickness of sheets and coatings, and the amounts and properties of materials on conveyor belts.

Radioisotope “thickness gauges” are unequalled in performance and are used extensively in almost every industry involved in producing sheet material (such as sheet metal or paper). It is highly unlikely that automation in such industries would be possible without the use of radioisotopes. Modern steel mills use such gauges to measure the thickness of rolled metals accurately at every moment during production. Paper mills use them to measure the density of wet pulp accurately in the first stages of paper production. These gauges are also frequently used in the food industry (such as in filling cereal boxes) and the oil industry, where determining the density of liquids, solids, or slurries is important.
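The physics behind such gauges is straightforward beam attenuation: the transmitted intensity falls off exponentially with the thickness of the material, so a measured intensity can be inverted to give a thickness. A minimal sketch in Python; the attenuation coefficient used here is a hypothetical illustrative value, not a property of any particular source and material:

```python
import math

# Radioisotope thickness gauge, in outline: transmitted intensity follows
# I = I0 * exp(-mu * x), so thickness x = ln(I0 / I) / mu.
MU_PER_MM = 0.15   # hypothetical linear attenuation coefficient (1/mm)

def thickness_mm(i_transmitted, i_source):
    return math.log(i_source / i_transmitted) / MU_PER_MM

# Example: if only 74% of the beam gets through, the sheet is about 2 mm thick.
print(f"{thickness_mm(0.74, 1.0):.2f} mm")
```

In practice, gauges of this kind are typically calibrated against reference samples of known thickness rather than relying on a tabulated coefficient.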

Many radioactive tracer techniques have been used to investigate the reasons for reduced efficiency in modern plant operations. Tracers are now routinely used to measure flow rates, study mixing patterns, and locate leaks in heat exchangers and pipelines.

Materials development. Appropriate exposure to radiation can change the molecular structure of certain materials and induce desired chemical reactions. For example, some polymers whose cross-linkage is induced by radiation can be tailored to shrink when heated, and “heat-shrink” products are now widely used in the packaging industry. Wire and cable insulated with radiation-cross-linked polyvinyl chloride exhibit excellent resistance to heat and chemical attack and are widely used in the automobile, aerospace, and telecommunications industries. Radiation processing is also increasingly used to cross-link foamed polyethylene for thermal insulation and to cure wood/plastic composites by gamma irradiation. The latter are gaining favor for flooring in department stores, airports, hotels, and churches because of their excellent abrasion resistance, the beauty of natural wood grain, and low maintenance costs. Many tire companies now use radiation to vulcanize rubber for tire production, an improvement over the conventional use of sulfur.

Materials testing and inspection. One of the earliest industrial applications of radiation was to measure engine wear in the automotive industry. Irradiating the surface of an engine part under investigation (such as a ring or a gear) makes that portion of the metal radioactive. In tests to see which materials hold up best during operation, any wear on that part results in some radioactive material being deposited in the oil lubrication stream, where it can be readily measured.

Corrosion in pipes is a common problem in the industrial world. By moving a gamma source along one side of a pipe and a detector along the other, technicians can map corrosion patterns precisely. The activation property of radiation is used extensively to determine the precise thickness of special coatings, such as the metal coatings used to produce galvanized or tin-plated steel. The penetrating property of radiation is routinely used to check welds in crucial places such as airplane wings, housings for jet engines, and oil and gas pipelines.

Energy. The coal industry benefits directly from using neutron gauges to measure and control the moisture content of coal and coke, and gamma sources are used to assay ash content as well as the combustion gases that go up the stack. Determining the sulfur and nitrogen content of coal is also of considerable interest because of their contributions to acid rain. A radiation technique called electron beam processing has been developed to remove both sulfur and nitrogen oxides from flue gas effectively and to allow the products to be converted into a commercially viable agricultural fertilizer.

The oil industry also depends heavily on the use of radiation to conduct business. Borehole logging often employs nuclear probes to determine the potential for economically viable oil deposits in test wells. Radiation monitors are also widely used to determine malfunctions in refinery operations.

Personal care and conveniences. Anyone who wears either contact lenses or glasses benefits directly from radiation. The saline solution used to clean and store contact lenses is sterilized by gamma radiation. Neutron probes are used to ensure the proper moisture content during the making of the high-quality glass used in eyeglasses. Cosmetics manufacturers often use gamma radiation to rid products of microbes before they are packaged for public consumption. One helpful feature of radiation is that it changes the molecular structure of some materials to allow them to absorb huge amounts of liquid. Useful products that rely on this include air fresheners, disposable diapers, and tampons.

Other fields

Radiation has an increasing role in public safety, including airport screening, crime solving, and the deterrence of terrorism at points of entry. The use of americium-241 in smoke detectors has undoubtedly saved thousands of lives and prevented untold property damage. Radiation is also a key component of archaeological dating and the enhancement of precious gems. It is likewise used extensively for measuring and controlling sources of contamination in our environment.

Advanced space exploration would not be possible without radiation technology. Plutonium-238 is widely used as both a heat source to keep instruments from freezing and a source of electricity to run instruments and communication devices. Propulsion that uses nuclear-reactor rockets will be needed for manned voyages to other planets or their moons.

Finally, radiation technology provides a powerful fleet of tools for probing and unraveling the mysteries of the basic structure of materials. From electron microscopes to very-high-energy accelerators, researchers have one of the best sets of technologies available both to explore existing matter and to synthesize new materials with highly desirable properties.

Obstacles to further progress

It is not a given that these impressive applications of nuclear technology will continue to expand. The public’s sometimes overriding fear of radiation has historically thwarted progress in many areas.

This fear has worked its way into numerous rules and regulations of federal and state agencies that have stymied progress and added considerable cost in several areas. For instance, the intense regulation of almost anything having a nuclear component forces practitioners to use time-consuming and expensive accounting practices. Is such detailed recordkeeping really warranted when its expense is ultimately passed on to the public? Some medical practitioners have reacted by moving into other areas of practice.

Perhaps a larger issue facing the nuclear medical industry is the disposition of low-level radioactive waste (LLW). There are currently only two U.S. sites licensed to receive this waste material: Richland, Washington, and Barnwell, South Carolina. Efforts to dispose of LLW in other areas have met strong public resistance, even though detailed scientific studies have shown such sites and their operations to be much safer than the disposal facilities for essentially any other waste commodity. As a result, long-distance hauling of LLW from hundreds, if not thousands, of sites adds appreciably to the cost of waste disposal today and hence to the cost of using this technology.

A significant impediment to the medical community is the limited availability of new radioisotopes. Currently, the United States imports at least 90 percent of the radioisotopes used in daily commerce. Further, the U.S. Department of Energy has reduced its research budget for producing and developing the use of new radioisotopes to zero. Some clinical studies to use new radioisotopes in curing cancer and other life-limiting diseases have been halted because of the lack of isotopes. Of perhaps greater concern, there are very few sources of alpha emitters, which have enormous potential for curing several types of cancer. Without a major change to revitalize the U.S. radioisotope program, nuclear medicine could stagnate. New techniques such as gene therapy will likely play an increasing role in the future, but even these often require the concurrent use of radiation technology in order to be successful.

Concern over radiation dangers is also thwarting progress in areas other than medicine. A classic case is food irradiation. This technology has been studied for more than four decades in several countries and has been declared safe and effective by essentially every relevant international scientific body. Yet only recently have U.S. federal approvals been given for its use on major food items. Irradiated foods sold in bulk, such as chicken or strawberries, are designated with the “radura” symbol on the package. Approval of the irradiation of seafood commodities is still pending, but efforts to gain it are under way. This is important because spoilage is quite high for many of these products. In a less visible aspect of agriculture, many thousands of acres of stubble are burned every year to cleanse fields of insects and other undesirable pests; gamma irradiation might provide a better soil-cleansing operation.

Even though radioisotopes are widely used in industry for gauges, the automation of processing, the manufacture of new materials, and so on, there is still reluctance in some quarters to use radiation because of concern that the public may be unwilling to accept products from a company utilizing radiation technology.

The U.S. space program has stagnated somewhat over the past decade or two because policymakers have been exceptionally cautious about developing nuclear propulsion engines. Fears of minute quantities of radioactive materials falling back to Earth after a mishap in space have sometimes overshadowed the fact that deep space exploration with sizable payloads simply cannot be accomplished without nuclear propulsion. The United States has launched only one nuclear reactor into space to date, but there are now plans to build and launch a substantially larger reactor as a key part of the Jupiter Icy Moons Orbiter project.

Since the 9/11 tragedy, public fears have risen about terrorists’ possible use of a radiation dispersal device (RDD) or “dirty bomb.” Although this is clearly possible, the actual health effects from such a detonation would almost certainly be far less than imagined by a frightened public. Several scores of radioisotopes are being used to supply the benefits described throughout this article, but only a handful of radioisotopes pose a real potential hazard in an RDD. Hence, it is important that police and firefighters be trained to deal with real dangers rather than perceived ones, so that unnecessary panic does not take place if someone threatens to use such a device or actually sets one off.

It is clear that President Eisenhower’s challenge to use the atom for peace has been ably met. The benefits achieved over the past 50 years are nothing short of astonishing. One out of every three patients who enter a U.S. hospital or medical clinic, for instance, benefits directly from nuclear medicine. This translates into over 10 million nuclear medical procedures per year. Even broader beneficial impacts are possible, such as the successful adoption of food irradiation in normal commerce.

But there are significant obstacles to overcome whenever radiation is used, mainly because of lingering public fears. Perhaps the most significant success that the scientific community could strive for in this field in the next 50 years is to effectively engage the public and political leaders in a dialogue to eliminate unnecessary fears of radiation. Making people more aware of the enormous daily benefits of radiation is an important first step. If we could accomplish this, the dream of a better world that President Eisenhower set before us could be achieved many times over.

Recommended reading

B. S. Ahloowalia, M. Maluszynski, and Karin Nichterlein, “Global Impact of Mutation-Derived Varieties,” Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture, International Atomic Energy Agency, Vienna, February 2003.

International Atomic Energy Agency (IAEA), Induced Mutations and Molecular Techniques for Crop Production, proceedings of a symposium jointly organized by the IAEA and FAO, Vienna, June 19-23, 1995.

Management Information Services, Economic and Employment Benefits of the Use of Nuclear Energy to Produce Electricity (1994).

Management Information Services, The Untold Story: Economic and Employment Benefits of the Use of Radioactive Materials (1994).

Management Information Services, The Untold Story: The Economic Benefits of Nuclear Technologies (1996).

“Irradiated Food, Good; Food-Borne Pathogens, Bad,” Nuclear News, July 2003, p. 62.

Jihui Qian and Alexander Rogov, “Atoms for Peace: Extending the Benefits of Nuclear Technologies” (2003) (http://www.iaea.or.at/worldatom/Periodicals/Bull371/qian.html).

Uranium Information Centre, Australia (2003).

Kazuaki Yanagisawa et al., “An Economic Index Regarding Market Creation of Products Obtained from Utilization of Radiation and Nuclear Energy (IV),” Journal of Nuclear Science and Technology 39, no. 10 (October 2002): 1120-1124.


Alan E. Waltar ([email protected]) is director of nuclear energy at Pacific Northwest National Laboratory in Richland, Washington.

America’s Coral Reefs: Awash With Problems

America’s coral reefs are in trouble. From the disease-ridden dying reefs of the Florida Keys, to the overfished and denuded reefs of Hawaii and the Virgin Islands, this country’s richest and most valued marine environment continues to decline in size, health, and productivity.

How can this be happening to one of our greatest natural treasures? Reefs are important recreational areas for many and are loved even by large portions of the public who have never had the opportunity to see their splendor firsthand. Coral reefs are sometimes referred to as the “rainforests of the sea,” because they teem with life and abound in diversity. But although only a small number of Americans have ever been in a rainforest, many more have had the opportunity to dive and snorkel in nearshore reef areas. And in contrast to the obscured diversity of the forests, the gaudily colored fish and invertebrates of the reef are there for anyone to see. Once people have seen these treasures, they are transformed from casual observers into strong advocates for protection. This appeal explains why many zoos and aquariums have rushed in recent years to display coral reef fishes and habitats, even in inland areas far from the coasts (Indianapolis, for example, hosts one of the largest of the country’s public aquaria). Coral reefs have local, national, and even global significance.

Even when one looks below the surface (pun intended) of the aesthetic appeal of reefs, it is easy to see why these biological communities command such respect. Coral reefs house the bulk of known marine biological diversity on the planet, yet they occur in relatively nutrient-poor waters of the tropics. Nutrient cycling is very efficient on reefs, and complicated predator-prey interactions maintain diversity and productivity. But the fine-tuned and complex nature of reefs may spell their doom: Remove some elements of this interconnected ecosystem, and things begin to unravel. Coral reefs are one of the few marine habitats that undergo disturbance-induced phase shifts: an almost irreversible phenomenon in which diverse reef ecosystems dominated by stony corals dramatically turn into biologically impoverished wastelands overgrown with algae. Worldwide, some 30 percent of reefs have been destroyed in the past few decades, and another 30 to 50 percent are expected to be destroyed in 20 years’ time if current trends continue. In the Caribbean region, where many of the reefs under U.S. jurisdiction can be found, coral cover has been reduced by 80 percent during the past three decades.

The U.S. government is fully aware of the value of these marine ecosystems and the fact that they are in trouble. In 1998, the Clinton administration established the U.S. Coral Reef Task Force (USCRTF), a high-level interagency group charged with examining reef problems and finding solutions. Executive Order 13089, which created the task force, stipulated that “all Federal agencies whose actions may affect U.S. coral reef ecosystems shall: (a) identify their actions that may affect U.S. coral reef ecosystems; (b) utilize their programs and authorities to protect and enhance the conditions of such ecosystems; and (c) to the extent permitted by law, ensure that any actions they authorize, fund, or carry out will not degrade the conditions of such ecosystems.” The task force comprises 11 federal agencies, plus corresponding state, territorial, and tribal authorities.

The USCRTF has looked for ways to better monitor the condition of reefs, share information, and coordinate management. Among the key government players are the National Oceanic and Atmospheric Administration (NOAA), Department of the Interior, Environmental Protection Agency (EPA), Department of Defense, Department of Agriculture, Department of Justice, Department of State, National Science Foundation (NSF), and NASA. Yet however well-intentioned this move on the part of government, coral reef health has continued to decline, and the USCRTF, while elevating the profile of the issue, has not been able to stem the degradation. The reasons for this ineffectiveness are complex and go beyond the “too little, too late” offered as the standard criticism. Although the response of the government may have indeed come too late for many of America’s reefs, the shortcomings of the task force have more to do with its reluctance to fully engage with the scientific community, take advantage of emerging technologies, and raise awareness about the consequences of reef degradation. If this is happening to our most treasured marine environments, what can the future be for our less-well-loved, less charismatic marine areas?

Threats to U.S. reefs

Even as we are becoming more fully aware of their enormous ecological and economic value, coral reefs are being lost in the United States, just as they are being destroyed in other parts of the world. Some 37 percent of all corals in Florida have died since 1996, and the incidence of coral disease at sampling sites there went up by 446 percent in the same short period. The U.S. has jurisdiction over a surprisingly large proportion of extant coral reefs, including the world’s third largest barrier reef in Florida; a vast tract of reef systems throughout the Hawaiian Islands; and extensive reefs in U.S. territories such as Puerto Rico, the U.S. Virgin Islands, Guam, American Samoa, and the Northern Mariana Islands. These reef resources contribute an estimated $375 billion to the U.S. economy annually, yet virtually all of these reef ecosystems are under threat, and many may be destroyed altogether in the coming decades.

Although in many parts of the world coral reefs are deliberately destroyed in the process of coastal development or to obtain construction materials, in the United States coral reefs suffer the classic death of a thousand cuts. They are strongly affected by eutrophication: the overfertilization of waters caused by the inflow of nutrients from fertilizer, sewage, and animal wastes. The overabundance of nutrients causes algae to overgrow and smother coral polyps, in extreme cases leading to totally altered and biologically impoverished alternate ecosystems. Reefs are also sensitive to sediments that increase turbidity and reduce the sunlight reaching the coral colonies. (Though corals are animals, they have symbiotic dinoflagellates called zooxanthellae living within their tissues. The photosynthesis undertaken by these symbionts provides corals with the extra energy needed to create the calcium carbonate that forms their skeletons and thus the reef structure.) Sedimentation is a common threat to U.S. coral reefs, especially in areas where unregulated coastal development or deforestation causes soil runoff into nearshore waters.

Because energy flows in coral reef ecosystems are largely channeled into ecosystem maintenance and little surplus is available for harvest, reefs are highly sensitive to overfishing. The removal of grazing fishes, for instance, increases the likelihood that algae will dominate the reef, causing a subsequent decline in productivity and diversity. Reef communities denuded of even relatively small numbers of fishes are also less likely to recover from episodic bleaching events, because recruitment is inhibited by the lack of grazing fishes to create settlement space. Similarly, declines in sea turtle species such as hawksbill and green turtles negatively affect reef ecology. The removal of top predators such as reef sharks, jacks, and barracudas can also cause cascading effects resulting in reduced overall diversity and declines in productivity. Despite these impacts, very few coral reef areas of the United States have fishing regulations expressly designed to prevent these ecological cascading effects from occurring. In fact, most people would be surprised to find out that even in seemingly protected reefs, such as those that occur within the Virgin Islands Biosphere Reserve around St. John, U.S.V.I., almost all forms of recreational and commercial fishing are allowed.

Coral reefs are also extremely vulnerable to changes in their ambient environment, having narrow tolerance ranges in temperature and salinity. Warming affects both coral polyp physiology and the pH of seawater, which in turn affects the calcification rates of hard corals and their ability to create reef structure. For this reason, even a slight warming of sea temperatures has dramatic effects, especially when coupled with other negative impacts such as eutrophication and overfishing. There is some indication that warming sea temperatures may render coral colonies vulnerable to the spread of disease or to increased mortality in response to normally nonpathogenic viruses and bacteria. The spread of known coral diseases and the emergence of new, even more debilitating diseases are alarming phenomena in the Florida Keys reefs and underlie many of the die-back episodes there in the past decade.

There is not yet widespread recognition of the grave dangers U.S. reefs face.

The effects of warming are most clearly manifested in coral bleaching. Bleaching is an event in which the zooxanthellae of the corals, which give corals their beautiful colors, are expelled from the coral polyps, leaving the colonies white. Bleached corals cannot lay down calcium carbonate skeletons and thus enter a period of stasis. A bleached coral is not necessarily a dead coral, however, and corals have been known to recover from bleaching events (we also know from paleoarcheology that bleaching is a natural event that preceded greenhouse gas-related warming of the atmosphere). Because some reefs do fully recover after bleaching, it is difficult to predict what consequences warming events such as periodic El Niños will have on the long-term health of any reef. This uncertainty has been seized on by both doomsayers and naysayers in the debate about the future of reefs: The doomsayers declare that the majority of reefs face certain death from bleaching, while the naysayers claim that bleaching is not only natural but adaptive. However, one thing is absolutely clear: Stressed reefs have a heightened sensitivity to temperature changes and are far less likely to recover from bleaching events. And with a few exceptions (some parts of the northwest Hawaiian Islands and Palmyra Atoll, for instance), all of the coral reefs in the United States are highly stressed by a combination of land-based sources of pollution, overfishing, and the destruction of habitats that are ecologically critical to reef communities, such as seagrass beds and mangrove forests. This does not bode well for a future in which sea temperatures will undoubtedly continue to rise.

These losses affect more than our personal environmental sensibilities. Reefs support some of the most important industries in the United States and the rest of the world: 5 percent of world commercial fisheries are reef-based, and over 50 percent of U.S. federally managed fishery species depend on reefs during some part of their life cycle. Herman Cesar, Lauretta Burke, and Lida Pet-Soede argue in a recent monograph on the economics of coral reef degradation that the costs of better managing reefs are far outweighed by the net benefits reefs provide. In the Florida Keys, for example, they estimate that a proposed wastewater treatment plant that would mitigate many of the threats to the reef tract would cost $60 to $70 million to build and about $4 million a year to maintain, while the benefits to the local population (estimated at a net present value of more than $700 million) would far eclipse the outlays. In Hawaii and the reef-fringed territories, coastal tourism is tightly coupled to intact reefs. Reefs in these regions not only provide tourist destinations, they also play important roles in controlling beach erosion and buffering land from storms. In such places, it is easy to see how an investment in better reef protection would be a small cost in contrast to the great benefits provided by sustained tourism revenues.
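The arithmetic behind that comparison is easy to reproduce in outline. The discount rate and time horizon below are my own assumptions for illustration; only the $60 to $70 million capital cost, the $4 million annual maintenance, and the greater-than-$700 million benefit figure come from the text.

```python
# Back-of-the-envelope present-value comparison for the Florida Keys example.
# Discount rate and horizon are assumed; cost and benefit figures are those
# quoted in the text (in millions of dollars).
def present_value(annual_cost, rate, years):
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

capital = 65                                              # midpoint of $60-70 million
maintenance_pv = present_value(4, rate=0.05, years=25)    # assumed 5%, 25 years
total_cost_pv = capital + maintenance_pv
print(f"Treatment plant, present value: ~${total_cost_pv:,.0f} million")
print("Estimated reef benefits:         >$700 million (net present value)")
```

Even under these deliberately rough assumptions, the present value of the treatment plant comes to well under a fifth of the estimated benefits.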

Inadequate responses

The failure to respond to the coral reef crisis in this country has to do with many factors:

Incomplete understanding of the problem and communication failures. Although there is an appreciation of the crisis worldwide, there is still reluctance on the part of some U.S. managers to consider the crisis “our problem.” Everyone is quick to lament the destruction of Southeast Asian and Indian Ocean reefs by dynamite fishers, or the use of cyanide in collecting coral reef fish in the Philippines, but the reefs under U.S. jurisdiction have hardly fared better. In the past decade, we have seen a slow awakening to the problems facing U.S. reefs, but the response has been to collect more data, slowly and painstakingly. At first independently, and then in a more coordinated fashion with the establishment of the USCRTF, government agencies have made greater efforts to monitor reefs in certain regions, but the massive amounts of data collected often create problems in data interpretation and management. Too little emphasis has been placed on either synthesizing the data collected or collecting data in new ways to make it more relevant for conservation. Lacking a synthesis or periodic syntheses, we end up burying our heads in the sand about what is happening to our coral reefs.

The USCRTF and the government agencies it represents have not actively looked for ways to partner with academic, scientific, and nongovernmental organizations to take advantage of information being collected and disseminated by them. Instead, the government has relied almost solely on the efforts of its own scientists. Many of its scientists, such as Charles Birkeland of the U.S. Geological Survey, are indeed world leaders in coral reef ecology and management, but collectively the research being undertaken by government agencies is either substandard, too conservative, or both. Virtually every new advance in coral reef ecosystem understanding has been made not by government scientists but by academics or researchers in the private sector. U.S. government scientists have not explored the potential of new technologies such as biochemical markers that indicate reef stress (pioneered by the private sector), nor have they properly harnessed the remote sensing technologies they have deployed in order to improve reef surveillance.

Even the knowledge that has been gained is inadequately communicated to the public and to decisionmakers. Part of the problem has been the rush to oversimplify what is actually a very complex set of issues, in the hopes that decisionmakers higher up will take both notice and action. In the Florida Keys, for instance, advocates for improving the water quality of the nearshore environment have fought against the restoration of the Florida Everglades, arguing that the increased water flows into Florida Bay would bring higher concentrations of pollutants to the reef tract. In casting the reef problems in such a simplistic light, proponents of singular solutions actually impede responsible government agencies from tackling reef problems head-on and in the comprehensive manner that is required.

The U.S. government has serious shortcomings when it comes to communicating and raising awareness about complicated environmental issues. For this reason, it would behoove the USCRTF to partner with organizations that have good outreach mechanisms in place, such as environmental groups. Such public-private partnerships would also ease the financial burden of the cash-strapped government agencies, allowing them to spend funds in short supply on management and on measuring management efficacy.

Poor use of cutting-edge science and the at-large scientific community. Although the United States is one of the most technologically advanced countries in the world, it has not adequately harnessed science to address the coral reef crisis. In a 1999 article in the journal Marine and Freshwater Research, Michael Risk compares the response of the scientific community to the coral reef crisis with its response to two other crises affecting the United States: acid rain in the Northern Hemisphere and eutrophication of the Great Lakes. Risk argues that whereas there was effective engagement of the scientific community in tackling the latter two issues, neither U.S. nor international scientists have helped craft an effective response to the large-scale death of reefs.

Risk is right to ask why science has failed coral reefs, but I take issue with his assessment of the nation’s inadequate response to the crisis. It is not the fault of the scientific community that the government has been slow to act to save reefs, but rather the fault of a government that has not known how to use science and scientists effectively. Decisionmakers have not engaged the scientific community and have failed to heed what scientific advice has been put forward. For instance, the government did not fully mobilize nongovernmental academic institutions and conservation organizations to help draft its National Action Plan to Conserve Coral Reefs, and as a result the plan has been criticized as lacking in rigor and ambition. It is telling that a World Bank project to undertake global coral reef-targeted research, which assembles international teams of leading researchers to address critical issues of bleaching, disease, connectivity, remote sensing, modeling, and restoration, includes few U.S. government scientists across its six working groups. This targeted research project is crucial: It intends to identify the key questions that managers need answered in order to better protect reefs, and it aims to do intensive applied research to answer those questions.

The National Action Plan to Conserve Coral Reefs was produced by the USCRTF and published on March 2, 2000. It is a general document describing why coral reefs are important and what needs to be done to protect them. There are two main sections: understanding coral reef ecosystems and reducing the adverse impacts of human activities. The first section discusses four action items: (1) create comprehensive maps, (2) conduct long-term monitoring and assessment, (3) support strategic research, and (4) incorporate the human dimension (undertake economic valuation, etc.). The second section is a bit more ambitious: (1) create and expand a network of marine protected areas (MPAs), (2) reduce impacts of extractive uses, (3) reduce habitat destruction, (4) reduce pollution, (5) restore damaged reefs, (6) reduce global threats to coral reefs, (7) reduce impacts from international trade in coral reef species, (8) improve federal accountability and coordination, and (9) create an informed public. All well and good, but despite its moniker the action plan provides almost no guidance on how to do these things. It called for each federal agency to develop implementation plans (required by Executive Order 13089) by June 2000. However, those plans were only to cover fiscal years 2001 and 2002, and the plans were never formalized or made public. The USCRTF recognized that a greater investment needed to be made to figure out how each agency was going to contribute to carrying out the action plan and pushed agencies to develop post-2002 strategies. To date, only the Department of Defense and NOAA have completed such strategies. NOAA’s plan is embodied in its National Strategy for Conserving Coral Reefs document published in September 2002. Both the action plan and the NOAA strategy are available on the USCRTF Web site (www.coralreef.gov).

The plans put forward by the USCRTF, however, place far too much emphasis on monitoring and mapping and far too little emphasis on abating threats and effectively managing reefs. The focus of research has been to monitor existing conditions rather than to set up applied experiments that would tell us which threats are most critical to tackle. This is not to say that all government research has been worthless. Regular monitoring in the Florida Keys allowed NOAA to understand the alarming “blackwater” event in January 2003 (in which fishermen noticed black water, later found to be a combination of a plankton bloom and tannins, moving from the Everglades toward the reefs) and reassure the public that it was a natural event, because they had several years of monitoring information with which they could hindcast. Similarly, the mapping investment, although too high a priority, has led to some interesting revelations: There are newly discovered reefs in the northeastern portion of the Gulf of Mexico that are now on the public’s radar screen, for instance.

Although the USCRTF has recognized the importance of MPAs in conserving reefs, it has not given the government agencies responsible for implementation any guidance on how to design these protected areas optimally. The action plan thus codifies a dangerous tendency to use simplistic formulae for designing protected areas. The plan states these as its goals: “establish additional no-take ecological reserves to provide needed protection to a balanced suite of representative U.S. coral reefs and associated habitats, with a goal to protect at least 5% of all coral reefs . . . by 2002; at least 10% by 2005, and at least 20% by 2010.” By adopting a policy of conserving 20 percent of reef areas within no-take reserves, without requiring planners to fully understand the threats to a particular reef and without guiding planners to locate such protected areas in the most ecologically critical places, the plan pushes decisionmakers to implement ineffective MPAs, thus squandering opportunities for real conservation. In some jurisdictions, these area targets have already been reached, with 20 percent of reef areas set aside as no-take zones, but because these areas were chosen more for their ease of establishment than for their ecological importance, little conservation has been accomplished. In a true display of lack of ambition and creativity, the USCRTF and its agencies have not considered using ocean zoning outside of MPAs to conserve reefs, and the MPA directives remain an old-school, one-size-fits-all approach.

Poor governmental coordination and lots of infighting. Since its formation in June 1998, the USCRTF has made some strides toward better monitoring, information sharing, and management coordination for reefs under U.S. jurisdiction. This is no minor feat, because until the task force was established, no effort had been made to promote communication and cooperation among the multitude of agencies and bureaus that each have a role to play in coral reef management. NOAA’s National Ocean Service and National Marine Fisheries Service, the Department of the Interior’s U.S. Fish and Wildlife Service, and the EPA are the key players in the USCRTF, but also important are the National Park Service, DOD, Department of Agriculture, Department of Justice, Department of State, NSF, and NASA. Although the major players (in particular, NOAA and the Department of the Interior) are engaged in internecine warfare over territorial claims and access to funding, some of the more minor players have taken their charge very seriously. DOD, for example, has developed its own plan for conserving the reefs under its jurisdiction, which include some of the most pristine reefs in the nation, such as those of Johnston Atoll in the central Pacific.

Unlike many terrestrial habitats, coral reefs suffer both from human activity that directly affects the marine environment (such as dredging, fishing, and marine tourism) and from activity on land that has an indirect but highly insidious effect on reef health and productivity. Thus, in order to better understand and manage reefs, it is imperative that the United States continue, and now strengthen, the coordinating mechanisms between the various government entities that control the wide array of human activities that damage reefs.

The USCRTF now has a roadmap to increase understanding about coral reefs and better protect them from further destruction, embodied in the National Action Plan to Conserve Coral Reefs. A subsequent report prepared by NOAA, in cooperation with the USCRTF, was submitted to Congress in 2002. The 156-page National Coral Reef Action Strategy provides a nationwide status report on implementation of the National Action Plan to Conserve Coral Reefs and the Coral Reef Conservation Act of 2000.

Will the USCRTF now be able to do what it could not in the first five years of its existence: stem the tide of degradation affecting U.S. coral reefs? Or is the U.S. government merely creating a façade of improved management, while government researchers and managers continue to work in isolation from cutting-edge researchers in U.S. academe, nongovernmental organizations, and international institutions? Will new policy developments, such as the administration’s support for broad environmental exemptions for DOD’s military training and antiterrorism operations, act to wholly undermine any substantive progress made by the USCRTF and the government agencies it represents?

Only time will answer these questions with certainty, but the initial impressions are not promising. The National Action Plan to Conserve Coral Reefs is too heavily invested in relatively easily accomplished activities such as mapping the nation’s coral reefs, and its formulaic and simplistic approach to creating MPAs is not likely to result in meaningful protection. Already overburdened and underfunded agencies are not getting the political mentoring they need to ensure that appropriations will be sufficient to allow them to carry out their mandates under these plans. Without public-private partnerships and private-sector financial support, too many elements of the plan will fall by the wayside. Neither the action plan nor the NOAA strategy provides adequate information on the true choices and tradeoffs that decisionmakers will have to consider and act on in order to create a revolution in the way we manage coral reefs. And clearly a revolution is needed; business as usual will only continue to put U.S. coral reef ecosystems in harm’s way. In the end, the United States may fall far short of its goal of demonstrating, for all the world to see, how to manage coral reefs effectively. Instead, it may well win the race to destroy the inventory of one of the world’s most diverse and precious environments.

Global forces are at play: the United States is not an island. Were the United States suddenly to act more effectively to protect reefs under its jurisdiction, our reef ecosystems would still be in some peril, for many reasons. First, many damaging activities occur out of sight, especially in remote reef areas with little or no surveillance. Second, the open nature of marine systems means that reefs are affected by the condition of the environment far from the reef tracts themselves. Sometimes larval propagules travel long distances, and the origin of recruits is tens or hundreds of kilometers away, in areas that could be entirely outside U.S. jurisdiction. Similarly, pollution from outside the U.S. can easily find its way to reefs within America’s borders. Finally, some threats to reefs are global in nature, such as rising temperatures caused by global warming. These threats will not diminish unless meaningful international agreements succeed in tackling the root causes of the threats. For all these reasons, protection of U.S. reefs will require more than administering the reefs within our borders; it will also require international negotiation, cooperation, and capacity-building.

Promising sign

Is there hope for U.S. coral reefs? Yes, as long as we can more fully engage the private sector and the scientific community in the struggle to save reefs, and at the same time convince decisionmakers of the need to take significant steps to protect these fragile ecosystems. It is a promising sign that in June 2003, NOAA, EPA, and the Department of the Interior convened a meeting in Hawaii to discuss coral bleaching and ways to gain better collaboration between the scientific community at large and the government agencies charged with managing reefs. The USCRTF is beginning to reach out to scientists involved in coral reef research and management outside the United States, such as the coral reef-targeted research working groups formed under the recent World Bank initiative. In this way, the U.S. government can begin to take advantage of the significant strides in scientific understanding that have been made by nongovernmental researchers, both in the United States and abroad.

New advances in technology may help coral reefs as well, and just in time. For instance, the Planetary Coral Reef Foundation has teamed up with the Massachusetts Institute of Technology and other academic institutions to attempt to launch a satellite that will provide real-time information about the condition of reefs worldwide. Such a satellite mission would make it possible to know the extent of coral bleaching and the presence of fishing operations anywhere in the world at any time. With such a system in place, traditional surveillance could be cut back, allowing money to be redirected toward conservation. At the same time, donors could get a better sense of where their investments are paying off in terms of real conservation of reefs and could identify trouble spots quickly enough to get funds flowing to places where emergency measures are needed.

With the full engagement of the scientific community and with partnering to remove some of the burden from beleaguered government agencies, managers will be able to tailor responses to the given threats at any reef location. Where fishing is deemed to be a major stressor, the United States will have to find the political will to manage reef-based fisheries more effectively. Where pollution (whether nutrients, toxics, debris, or alien species) is undermining reef health and resilience, coastal zone and agricultural agencies will have to work to find ways to reduce pollutant loading. Where visitor overuse and diver damage are issues, managers will have to look for ways to prevent people from loving reefs to death. And in all areas, managers will have to resist oversimplifying the situation and begin to better inform the public and decisionmakers about the hard choices to be made.

The coral reef crisis is indeed our problem. It affects our natural heritage and the livelihoods of a great number of our citizens. Only when the people in power recognize the magnitude of the problem will effective steps be taken to engage the wider scientific and conservation community in safeguarding reefs. When future generations look back at the dawn of the millennium and the environmental choices that were made, they will either curse us for letting one of nature’s most wondrous ecosystems be extinguished or praise us for recognizing the great value of reefs and moving to protect them. I hope it is the latter.

Archives – Winter 2004

Photo: NAS Archives

Before There Was Radar

During World War I, armies on both sides used sound location to detect enemy aircraft flying out of visual range. The job required observers competent in localizing moving sources of sound. In order to test and train prospective sound locators, University of Iowa physicist George Stewart, working under the auspices of the National Research Council, devised a test apparatus called an operations recorder, pictured above. The device, which used available materials such as a bicycle wheel and stand, rubber tubes, a wooden trough, and standard army geophones, recorded both the locations of moving sounds (created by bubbles of air passing through water) and the test subject’s estimates of those locations.

Saving species

In these days of unremittingly bad news about Earth’s environment–in particular the state of its ecosystems and species–what a pleasant prospect to encounter a book with the title Win-Win Ecology. As one of those who have added to the litany of books and articles (and even a permanent museum installation) devoted to the causes and consequences of the mounting carnage of ecosystem degradation and species loss, I couldn’t help but feel a ray of hope even before cracking open Win-Win Ecology. And if I came away not as fully buoyed with renewed confidence as I might have wished, I did find plenty of food for thought and some bold and insightful messages about what we can realistically expect to do to cope with what would be, in the coming decades, the sixth mass extinction to engulf Earth in the past half-billion years.

Coping is the right word to describe Michael Rosenzweig’s message here. A well-known and respected ecologist at the University of Arizona, Rosenzweig wants us all to become a little more realistic about what we have done and continue to do in transforming the surface of the planet. (He says that we have altered 95 percent of Earth’s surface so far, which is more than I would have liked to hear.) Humans have to cope with the realities of what has already happened and will likely continue. We especially have to get used to the idea that extinction has happened in the past, is certainly happening now, and will inevitably continue no matter what we do and hope. The best we can do is to slow the extinction rate, letting those species that cannot adjust to the presence of 6 billion-plus humans go to their inevitable doom, while helping the survivors to go on.

But according to Rosenzweig, it is not just we humans who must cope; Earth’s species must also learn to get along with us. He says there are two kinds of species: those that can’t stand our presence and those that can either coexist with us or, with a little help, learn to coexist with us. Nature, he points out, is incredibly resilient. The explosion of black bears, white-tailed deer, and coyotes in suburban and even urban areas shows that some species driven back by the initial onslaught of human contact can indeed adjust their behavior to the realities of human presence. Rosenzweig is by no means proposing a new baseline definition of biodiversity that would include just starlings, house sparrows, Norway rats, and other species that thrive in human-dominated environments. He has in mind something a bit more ambitious and encompassing.

Key to what Rosenzweig calls reconciliation ecology is conscious human management of the environment. He contrasts his ideas with what he presents–somewhat unfairly–as two earlier, inadequate (if not wholly outmoded) cornerstones of conservation biology: reservation and restoration ecology. Rosenzweig argues that reservation ecology, which seeks to wall off putatively pristine tracts from human use, is unrealistic because truly pristine habitats no longer exist and because it is intended to preclude human management. But this is a bit disingenuous. After all, the act of walling off something is a thoroughly human action, however unrealistic and doomed to failure it might be.

Somewhat more realistic in his view, restoration ecology seeks to return degraded ecosystems to their original state, its realism deriving mostly from the overt, conscious involvement of human beings rather than from any hope of recapturing the halcyon days before the human onslaught. But best of all, in Rosenzweig’s eyes, is the science of inventing, establishing, and maintaining new habitats to conserve species diversity in places where people live, work, or play. This is his definition of reconciliation ecology.

I cannot think of any prominent conservationist (scientist or impassioned layperson) who would any longer disagree with the idea that to be successful in the long run, any conservation effort simply must fold in the economic realities of the people actually living in or on the periphery of the real estate in question. And every serious conservation effort I’ve encountered since the mid-1980s (in Africa, Madagascar, and South America, as well as the United States) has taken as a given the need for human management of land that still has some biodiversity worth trying to save.

Indeed, with increasing frequency, wild species and humans are encouraged to tolerate one another. The Huab Valley in northwestern Namibia boasts a number of “reserves” owned by indigenous peoples who live in small farming communities. They have a traditional antipathy toward marauders, such as elephants, who eat and trample their vegetables and fruits, and lions and leopards, who prey on their cattle and goats. The task has become to convince the people owning these lands to tolerate a bit of marauding in exchange for the economic rewards of ownership, not just of fenced-in bits and pieces of land but of the whole domain, including its remnant populations of wild animals and plants.

This coexistence made it possible for me one recent morning to track on foot a mother rhino and her two-month-old calf and to enjoy a herd of desert elephants climbing a small mountain. The occasional human dwelling came into view, but that was, after all, part of the deal. One settlement had a small medical facility, a school, and other amenities of modern life that were paid for in part through an arrangement between the inhabitants and ecotourism providers.

Rosenzweig thinks this is the type of thing we should be doing. But his triad of reservation, restoration, and reconciliation ecology, although perhaps a didactically useful distinction in the conservation biology classroom, is already outmoded in describing the current complex world of conservation theory and practice. The necessity of integrating human economic needs with conservation has become blindingly obvious to everyone who cares about stemming the tide of species extinctions. Rosenzweig, because of his discussion about the failures of reservation and restoration ecology, is worried that he will be labeled anticonservation. He needn’t be. The world has already moved on.

Getting the science right

Rosenzweig is passionate not only about reconciliation ecology but also about the ecological science that underlies his conservation program. Although science is not the essential ingredient in formulating conservation policy and practice that Rosenzweig seems to think it is, it is nonetheless fascinating and ably presented in the book. Rosenzweig explains very well the theoretical and empirical studies that have revealed the simple and direct relationship between the size of a region and the number of species that can and do live there. Intuitively, conservationists have always known that habitat is the key to understanding biological diversity: how much biodiversity there is, how it can be destroyed, and what sorts of things we might do to avoid further extinctions. Rosenzweig tells us why that is, although I was puzzled to read that he believes that the relationship between the size of a region and the number of species it can support was established only in the 1990s. Biologists such as Rosenzweig and me all read MacArthur and Wilson’s 1967 Theory of Island Biogeography (curiously cited here only in a footnote), a treatise all about the relationship between the size of a place and the number of species it can and typically does contain, and the founding statement of this critical ecological pattern.

I also was a bit disappointed in Rosenzweig’s discussion of why diversity in any given-sized region stays more or less in equilibrium. Diversity, he tells us, is a balance between the rate of production of new species (speciation) and the extinction of species. Fair enough. But he says that speciation and extinction are linked because they are feedback processes: The more species there are, the greater the risk of extinction. But research in paleobiology in recent years has shown that there is a much tighter causal connection between extinction and evolution. Sandwiched between ecological disruption and recovery on the local scale, and the rare but powerful pulses of mass extinction followed by evolution on the grand scale, lies an entire category of events that I take to be the true guts of the evolution of life: Every once in a while, climate change or another form of physical disturbance overwhelms entire regions; regions large enough to encompass the entire geographic ranges of species. A threshold is reached, and the extinction of many species belonging to the many different lineages in the region occurs nearly simultaneously. This is accompanied, or followed shortly thereafter, by bursts of speciation among the survivors. Speciation seems almost contingent on extinction in such events, and in any case there is substantial evidence that the two processes are linked far more closely in dynamical terms than the vague feedback relationship that Rosenzweig describes.

But one must wait until the second part of the book to encounter the science in any depth. Intriguingly, Rosenzweig spends most of the book’s first half recounting story after story of deliberate and inadvertent examples of reconciliation ecology: an underwater restaurant in Israel, a power-plant haven for endangered American crocodiles, and the inadvertent-turned-deliberate value of a military base for saving an increasingly rare and endangered species of woodpecker. Often charming and engagingly told, Rosenzweig’s anecdotes take us around the globe while gently making his points about how we are to wrestle with the difficult problem of saving what is left of the living world.

Rosenzweig ends Win-Win Ecology with an eclectic excursion into the policy issues involved in implementing reconciliation ecology. He is against most governmental regulation that might support such efforts, for reasons that seem to reflect his personal political tastes more than any cogent argument. I find that odd, because conservation is quintessentially a social, not a scientific problem. We are collectively nations of laws, and however cumbersome and intrusive the legislative and overseeing judicial apparatus may in fact be, this is the general way in which conflicts of interest are resolved. Although underwater seafood restaurants and similar endeavors are striking examples of reconciliation ecology, they cannot provide the entire answer. The collective conservation effort needs all the help it can get.

Feeding the world

No other fundamental human deprivation affects as many people as does hunger, the chronic shortage of food that makes it impossible to live healthy and vigorous lives. Millions of people live at constant risk in regions of acute armed conflict, tens of millions are afflicted by AIDS, more than 100 million suffer from various chronic parasitic diseases, and malaria strikes about 500 million people a year. But at the beginning of the 21st century, according to the United Nations Food and Agriculture Organization (FAO), hunger was a daily experience for some 840 million people, and more than twice as many suffered from some form of malnutrition. Moreover, chronic hunger means much more than a craving for food. It stands for compromised health, lack of physical vigor, limited intellectual achievement, and curtailed life expectancy. The coexistence of this debilitating condition with an obscene food surplus is one of the starkest illustrations of the divide between developed and developing countries. As Nobel laureate Amartya Sen points out, the existence of massive hunger is even more of a tragedy because it has been largely accepted as being essentially unpreventable.

This book is a worthwhile attempt at saying otherwise, as it details the global extent of this demeaning phenomenon and surveys and evaluates possible solutions. The authors are all economists [the first three are based at the University of Minnesota, whereas Mark Rosegrant works at the International Food Policy Research Institute (IFPRI) in Washington, D.C.] with long records of research, publishing, and policy-related activities in food and agriculture. Unlike many of their more theoretically oriented and narrow-minded colleagues, they are well aware of linkages between economic activities and the environment, health, research, and good governance, and are thus less inclined to offer any purist prescriptions.

Their starting point is the acceptance of globalization as a reality with both positive and destructive consequences. This leads to an automatic rejection of any radical antiglobalization sentiments as well as to the acknowledgment of the inadequacy of private markets in solving problems that involve public goods. The book’s text is evenly divided between two parts: the Challenge, which looks at the extent of hunger and at the linkages with science and institutional change, and Solutions, which deals in detail with the relevant policies and institutions and with the costs of the whole undertaking.

To me the book’s most valuable contribution is its stress on the importance of agricultural research. At a time when research payoffs in many intensively financed but mature fields–from the development of new drugs to improvements in energy efficiency–are often discouragingly low, agriculture remains one of the few areas with an enviable record of translating knowledge into tangible benefits. Although the returns vary depending on the type of crop, the pooled result of hundreds of economic studies is that since 1958, agriculture has achieved an average real return of 77 percent, a figure that no other industry has come close to matching over such a long period.

Returns are high, but so are the lead times. This reality is made clear by the book’s most fascinating illustration, which shows the partial pedigree of the wheat variety Pioneer 2375, released by Pioneer Hi-Bred International in 1989. The branching graph shows more than 50 ancestor cultivars from a dozen countries, with more than a third of the entire pedigree coming from varieties that were introduced before 1940. Accumulation of innovations is thus of immense importance in agricultural research, as is the recognition that research benefits, impressive as they may be, may begin to flow only 10 to 15 years after a particular quest is launched. And the book cites one other startling statistic: In the United States, 35 to 70 percent of the research effort may be needed just to maintain the previous research gains.

In view of these realities, two facts are particularly troubling. The authors discuss the first one; I will emphasize the other. The first is the slowdown in the growth of spending on this indispensable research in some parts of the world where it is needed the most. This investment is perhaps best measured in terms of spending per unit of agricultural gross domestic product. Since the mid-1970s, this indicator has risen significantly in affluent countries and increased appreciably in Latin America and parts of Asia. But it has remained constant in China (the world’s most populous nation, where a rapid dietary transition is leading to much higher demand for animal foodstuffs), and it has actually fallen in sub-Saharan Africa, the only part of the world that has not seen any real gains in food supply in recent decades (while being ravaged by AIDS). This is obviously worrisome because the future of agricultural productivity, much like its post-World War II past, lies in higher yields; and although the recent rate of yield growth has declined for some crops in some regions, there is no doubt that absolute ceilings are still far off and could conceivably be pushed higher through genetic manipulation.

The second reality is that neo-Luddites (mostly, but not exclusively, Europeans) are now fighting an ideologically motivated battle against genetically modified (GM) crops. In contrast to such counterproductive attitudes, the authors begin their brief discussion of GM crops by noting their popularity with farmers and then calmly weighing the known benefits and possible risks. They conclude by citing former IFPRI director Per Pinstrup-Andersen’s reminder that, although none of us has the ethical right to force the adoption of GM crops on anybody, neither do we have the right to block access to these cultivars, whose benefits should not accrue, as has been the case thus far, only to farmers in rich countries.

Because the development of GM crops is still in a relatively early stage and it will take time to better understand the linkages and complexities involved in their diffusion and use, I cannot blame the authors for not devoting more space to this critical topic. But I would like to stress what too few people seem to be willing to say openly: As the potential risks of GM crops have received widespread scientific and public attention, it has become politically incorrect to extol their enormous long-term rewards. I have no doubt that in the long run the transformation they promise to bring to farming will prove incomparably greater than that wrought by the Green Revolution.

However, the success of GM crops is by no means the sine qua non for ending hunger. Indeed, institutional changes that could be made today could bring about the conditions necessary for better access to food, the lack of which is the key cause of today’s hunger. Here the authors do an even more thorough job than they did in assessing the merits of agricultural research. They devote two lengthy sections to the institutional and policy changes that will be needed to end hunger.

In these chapters, readers familiar with the issue will be reminded of the strengths and weaknesses of international institutions in the quest to end hunger, the proper role of nongovernmental organizations, the misplaced national desires for self-sufficiency in food supply, and the centrality of female education and adequate water provision. Water shortages (and water pollution) are justifiably given special emphasis because they will affect the future of global food production more acutely than the lack or the degradation of farmland, although the latter is worrisome in many regions.

Those who have never thought about what it would take to end global hunger will benefit from these relatively brief but systematic and fairly comprehensive overviews. They will realize that achieving that goal is an undertaking that is vastly more complex than simply increasing the food supply, and that disparate factors such as the preservation of biodiversity must be addressed before there can be any lasting solution to the challenge. This is where the book really succeeds as it explains how achieving that seemingly simple goal will require a complex combination of diverse responses, adjustments, and transformations. And it shows what can be realistically done, which is not cutting the number of hungry people in half by 2015, as advocated by the FAO, but ending chronic mass hunger by 2050.

Finally, fewer and fewer recent reviews seem to pay any attention to a book’s appearance, but I have always considered this an important consideration for a handheld object that is supposed to be more than just printed paper. This book does well on all counts: an uncluttered design, clear graphs (only two global maps are too small and indistinct to be of real use), an appropriate number of tables, and black-and-white images by Brazilian photographer Sebastião Salgado, used as frontispieces in all sections, that include a number of captivating portraits and contrasts.

Society’s glue

Any country that is doing nation-building full time needs to be concerned about social cohesion: the elusive glue of civility, trust, and cooperation that is essential to a society’s health. Why do some nations thrive and others stall? What are the signs of social corrosion? How do failed states regenerate? And how do developed countries maintain cohesion as immigration increases, populations diversify, and politics polarize? These questions have received extensive attention recently as a result of Harvard public policy professor Robert Putnam’s influential essay Bowling Alone, which pointed to the decline in popularity of bowling leagues as indicative of the deterioration of communal activity in the United States.

George Mason University political scientist Francis Fukuyama’s research on trust extended these questions to economic outcomes in a number of countries. Fukuyama argues that social cohesion, as measured by the amount of trust people have in one another, is inextricably related to economic performance. He makes his case not with economic data but with historical case studies of low- and high-trust societies. He suggests that trust affects the ability of people and firms to organize efficiently and therefore their ability to prosper.

The essays in The Economic Implications of Social Cohesion test Fukuyama’s hypothesis for two high-trust countries, Canada and the United States. Using detailed economic data for both countries, this book asks whether regions in North America with higher levels of trust really are better off. Do they grow faster than the regions with less trust? Do people migrate from the regions with lower social cohesion to the regions with higher cohesion? At a micro level, does social cohesion in childhood produce better test scores in high school and affect economic well-being when the child is an adult?

Little quantitative work has been done on these questions, in part because of measurement issues. Social cohesion is an amorphous concept that is hard to measure; the economy is a kaleidoscope that is easy to mismeasure. But even if the concepts could be precisely measured, causality would be difficult to determine when there are so many intervening variables. Cross-sectional or cross-national comparisons can lead only to conclusions about correlations; conclusions about causality require highly refined longitudinal data and analyses that are not yet available in this field. Fortunately, the authors in this book are more intrepid in the face of these problems than are many of their peers. And they are more introspective about their conclusions.

The editor of this book, Lars Osberg, is a well-known Canadian economist who has spent much of his professional life worrying about the implications and mismeasurement of poverty. For this book, he has assembled a group of academics with different expertise on issues of social cohesion and economic growth. The Canadian focus of some of the chapters is combined with more general conclusions that are directly applicable to other developed countries and indirectly to developing countries.

“Trust in people” is the most pervasive indicator of social cohesion used in this book. Information about this indicator has been collected from a number of countries over time through the World Values Survey, an international social science survey. However, the survey results are not always consistent with those of national surveys asking similar questions. Therefore, although the survey may be useful for comparing countries at one point in time, it may be less reliable for describing trends over time. In any case, no single indicator is likely to capture the complexity of social cohesion. Other indicators used include membership in organizations, participation in voluntary activities, and residential mobility.

John Helliwell, a prolific researcher on social capital, analyzes detailed comparisons of economic growth and social cohesion in Canadian provinces and U.S. states. He finds, as others have, that people who are members of groups are more inclined to say that they trust people. This holds even though memberships in the United States tend to be more in religious organizations, whereas in Canada they tend to be more in labor unions.

Levels of trust vary widely within both the United States and Canada. Trust is higher in the western and north central United States and in western Canada than in other regions. In both countries, regions with the highest levels of trust are economically more prosperous than other regions. They are also the destination of internal migration. However, per capita economic growth has been strongest in the poorest regions within each country, which is not surprising, because the economic health of regions within a country tends to converge over time. This illustrates the difficulty of isolating the signal of social cohesion amid more powerful macroeconomic forces.

At a more personal level, Shelley Phipps and Jane Friesen correlate indicators of social cohesion with the school performance of high-school children and their economic well-being as adults. These analyses suggest some intuitive relationships. The amount of social cohesion in a child’s life is positively correlated with how well the child does on English exams in grade 12. And adults who describe a childhood with more social cohesion are more likely to be economically better off than those who describe a childhood with less cohesion. A chapter on health also suggests that high levels of trust in others are correlated with good adult health in at least seven countries, including Canada and the United States. This finding is consistent with a growing body of research on the social and psychological correlates of good health. At the same time, one local indicator of social cohesion–the number of memberships in organizations–was uncorrelated with health status.

Greater income inequality is associated with lower levels of social cohesion as measured by voluntary activity. But voluntary activity is an incomplete measure of social cohesion. People, such as immigrants, with large extended families often provide a lot of voluntary care to relatives, which is not counted as voluntary work in surveys. Also, a high level of volunteer work in a closed or rigid community can encourage social exclusion instead of social inclusion. The surveys of volunteerism cannot distinguish between these different outcomes.

Social cohesion and voluntary activities were also correlated with Protestantism in Canada. This is consistent with arguments Max Weber made a century ago. In The Protestant Ethic and the Spirit of Capitalism, Weber observed that Protestant sects seemed to create stronger community ties than did official or national religions. Seymour Martin Lipset, in examining Canada and the United States a decade ago, also noted that the sectarian nature of Protestantism made it more communal than Catholicism in both countries.

The chapter on volunteerism relied heavily on the Canadian National Survey of Giving, Volunteering and Participating. The 2000 edition of the survey, which was published too late to be used in this analysis, suggests that charitable contributions are increasing in Canada but that volunteering hours are declining because of strong recent economic growth. This implies that the relationship between the economy and social cohesion, as measured by volunteerism, is more complicated than the authors had hoped.

Unanswered questions

The book has some important gaps. Perhaps the most important question left unaddressed is how large immigrant populations affect a country’s social cohesion. Eighteen percent of the Canadian population is foreign-born, as is 11 percent of the U.S. population. This book mentions that ethnic groups in both Canada and the United States have lower levels of trust than people who do not define themselves as ethnic, and that neighborhoods with large immigrant populations tend to have less social cohesion. But these observations are left hanging without more discussion or data.

Canada and the United States are facing the challenge of large immigrant populations differently. The United States still believes in its melting pot metaphor for the integration of new immigrant populations; Canada explicitly embraces a multi-pot approach that requires no melting. Canada offers free language classes to all immigrants and citizenship within three years. In the United States, the average time for naturalization is about 10 years. Which approach is more socially cohesive? Does it make a difference in the economic productivity of the new immigrants? The differences in immigration policies between the two countries create a natural experiment in social cohesion that goes unexplored in this book.

Another issue not addressed is how the changing family structure in Canada is affecting its social cohesion. Canadians marry less often than do Americans, but they live in common-law unions much more often. In fact, Quebec is now tied with Sweden for the highest percentage of common-law unions in the world. These long-term common-law unions, which often include children, do not last as long on average as marriages. What effects do common-law unions have on social cohesion? Do common-law parents stay as involved with their children after a separation as do divorced parents? What about economic success? These are questions that will have to wait for the sequel to this book.

The importance of social cohesion for effective governance contrasts with the modest research literature on the topic. Data and methodological difficulties are only part of the problem. The more important issue is that social cohesion has only recently been recognized as a critical variable in the governance of political units at every stage of development. The bromides of economic growth have dominated the discussion of governance for so long that social cohesion has not been a major research topic. The linking of social cohesion and economic development in this book signals a new direction in research and offers a new set of public policy options for governments.

The title of the book is more ambitious than the individual chapters, and as with all collections, the chapters are uneven. It is frustrating that the science is still at the correlation stage of conclusions. But the chapters raise a number of important issues that will have to be addressed by subsequent work. Given that policymakers in many countries are realizing that nation-building is the hardest job they will ever have, their staffs need to read this book for insights on why it is so frustrating.

Breeding Sanity into the GM Food Debate

The debate over biotechnology seems to get ever more intractable, its costs higher, the disputants angrier. Europe is on the verge of requiring the tracking of all genetically modified (GM) food from farm to grocery store, despite strenuous opposition from the United States. Zimbabwe has rejected emergency food relief that contained unmilled GM corn. Germplasm for agricultural R&D once moved freely among countries, but the flow has slowed to a trickle as developing countries rich in biodiversity restrict exports of wild plants, hoping to share in the profits from their hidden genes. With so much at stake, one would expect every issue in this arena to be exhaustively examined, argued, rebutted, and negotiated. But what is striking is how little actual debate there really is. There is conflict, but no engagement. Why? What keeps the debate a take-no-prisoners war rather than a spirited rational discussion?

Answering this question requires understanding what both sides think the most important issues are. This task is made more difficult because the views of biotech critics have not been widely disseminated in the popular press, whereas the views of advocates are well known. Describing the critics’ perspective and contrasting it with that of biotech advocates may shed light on the deadlock and on the reasons behind the rancor.

Biotech advocates typically cast the debate as about the safety of eating GM food and the possible ecological damage from growing it. They present the situation as a drama of knowledge battling against fear of the unknown. The fears of biotech critics, though, are about the social, political, economic, and cultural effects of biotechnology on global agriculture. The leading critics of biotechnology–the ETC Group, Genetic Resources Action International, Third World Network, Institute for Agriculture and Trade Policy, Oxfam, and others–are concerned about Third World poverty, globalization, economic justice, and many other social issues. To these critics, the biotech debate is not about the health effects of GM food but about control of the food supply. Their fears are not of the unknown but of the too well known: the concentration of industry power.

When biotech advocates recognize the social, economic, and political concerns of critics at all, they treat them as generic grievances against industrial agriculture or globalization, unconnected to biotechnology. But the crucial link that connects them is intellectual property rights, particularly patents on plants. It is because industrialized countries have elected to consider GM plants patentable that biotechnology threatens to take control of the food supply out of the public domain and hand it to multinational corporations. Until 1980, when a divided U.S. Supreme Court approved a patent on a GM bacterium that could digest oil, living creatures were outside the utility patent system. When the U.S. Patent and Trademark Office extended this decision from microorganisms to GM plants and animals, it opened the gates for a flood of patents, money, and power into the biotech industry. Thus, even though many worrisome trends in industrial agriculture date back 50 years or more, it is biotechnology that has extended, accelerated, and put the power of the state behind those trends through patents. This is what galvanizes the critics of biotechnology.

It is important to be clear that the critics are not opposed to biotechnology itself. This is one of the points most often misunderstood. Nor are they opposed to patents themselves. They are opposed to the patenting of plants, which biotechnology makes possible. They consider this new expansion of the patent system to be an ill-conceived transfer of the raw materials of all food production from the public domain to private control. Unfortunately, most biotech advocates seem to cavalierly dismiss such notions. They repeatedly turn away from the full range of issues and back to food safety, missing the opportunity for a genuinely productive exchange.

Fuel for conflict

The misunderstandings that flow from seeing the issues as exclusively ones of science and health are clear from comments by biotech advocate Norman Borlaug on the controversy over U.S. emergency food aid to Sudan that included GM food and seeds. According to Borlaug, Elfrieda Pschorn-Strauss of the South African organization Biowatch stated: “The U.S. does not need to grow nor donate genetically modified crops. To donate untested food and seed to Africa is not an act of kindness but an attempt to lure Africa into further dependence on foreign aid.” Tewolde Egziabher of Ethiopia stated: “Countries in the grip of a crisis are unlikely to have leverage to say, ‘This crop is contaminated; we’re not taking it.’ They should not be faced with a dilemma between allowing a million people to starve to death and allowing their genetic pool to be polluted.” But Borlaug responds that, “neither of these individuals offers any credible scientific evidence to back their false assertions concerning the safety of genetically modified foods.” Notice, though, that neither of them has made any assertions concerning the safety of GM foods at all. Their comments are about the social, economic, and cultural consequences when GM grain sent as food is inevitably planted.

This issue of world hunger and biotechnology is one of the angrier disputes between critics and advocates, with each side accusing the other of indifference to the world’s poor. Yet both sides are concerned for the poor; what they disagree about is the relative importance of productivity versus social and political factors. Biotech critics believe that advocates naively equate increased food production with decreased hunger. The critics never claim that production does not matter, only that it takes place in a social, political, and economic context that will determine how and whether the food that is produced actually reaches people. Since these are central issues for the critics, we should look at them more carefully: first at dependence and then at genetic pollution.

If plants are patentable, there are two important direct effects: Farmers can no longer save seeds from each year’s crop to plant next year, and the price of seeds goes up. Critics see a cascade of consequences. Some of the benefits of GM seeds can be gained only by using other expensive technology, such as herbicides. Farmers have to be able to front the money for expensive inputs each year while waiting until harvest to see any income. The danger of bankruptcy in a bad year goes up. Economies of scale become more important, resulting in larger farms. Big farms use more machinery and fewer workers, increasing the importance of capital relative to labor. The effect is first to lower wages, then to drive the rural poor off the land where they cannot make a living. The dependence on foreign aid cited by Pschorn-Strauss comes with this whole system of industrial agriculture that patents on GM seeds promote: high-input farming with fertilizers and chemicals and seeds that farmers need to buy every year from multinationals, and the credit system they will need, and so on.

But couldn’t farmers choose to opt out of the patent system and stick with unpatented seeds? Critics don’t think so. There are many reasons, but basically farmers who do not buy the new seeds will not reap the benefits of them, yet they will still pay some of the costs. Here is a simple example. Suppose the new seeds raise productivity. When more food comes to market, the price will go down. Farmers who do not use the more productive seed will not produce more, but will still receive the lower price, so their incomes will drop. But isn’t the economic pressure on farmers to join the system of patented commercial seeds justified by the economic benefits to those who do join? It depends. In theory, more productive technology creates extra value that someone can reap. It is one of the lessons of history, however, that the producers of inputs (seeds, fertilizers, and pesticides), not the farmers, usually reap those benefits. Since the onset of the era of industrial agriculture in the United States, the portion of the consumer’s food dollar going to farmers has dropped steadily, from 41 percent in 1950 to 19 percent in 2000, while the shares going to input suppliers and food processors have gone up, according to the U.S. Department of Agriculture.

Even if one thinks industrializing agriculture is a desirable pattern of development, care must still be taken, say biotech critics. If farmers have to buy seed, then the country’s food supply depends on the seed market successfully delivering seed to farmers. In the United States, with its highly developed infrastructure, this might be taken for granted. In much of the world, it cannot. Suppose next year there is war, inflation, banking system collapse, or rebels cutting supply lines. Seed might simply not show up, or might be delayed those crucial weeks during which it must be planted, or farmers may have no money to purchase it. Dependence on a theoretically more productive technology can result in farm production that is more vulnerable to political and economic shocks.

Biotech advocates simplistically present the debate as a drama of knowledge battling against fear of the unknown.

Now step back and look at the larger picture of international relations and the widening gap between rich and poor. Patents are first and foremost an extraction of rent by owners from users. Since the developed world overwhelmingly owns the patents on new technology–about 97 percent now, and with little expectation of change–patents are a transfer of rent from the poor to the rich. This regressive effect is exacerbated in agriculture, where the developing world is the farmers and the industrialized world is the input suppliers. Recall the history of industrial agriculture in the United States: The portion of the food dollar going to farmers goes down, and the portion taken by input suppliers goes up. Only the effects will be worse this time, because now the two groups will not be from the same country, so there will be no government to ameliorate the damage with social services, tax transfers, and other subsidies, such as the $30 billion in emergency bailouts of U.S. farmers during the past six years. Furthermore, developing countries depend far more on agriculture than do industrialized countries, with most of their people making a living from it. It is in agriculture (and textiles) where monopoly ownership of an essential input does the most harm to developing countries.

Consider now Egziabher’s concern about pollution of the genetic pool. Once modified genes are in the pool, farmers can no longer sell to consumers averse to GM food. Genetic pollution can cost countries their export markets. Already, Canadian organic canola farmers have lost theirs, and U.S. farmers are losing billions of dollars a year in exports to Europe. It can also complicate ownership of farmers’ production. In the fall of 2001, modified genes were found in corn strains in Oaxaca, Mexico, the center of the world’s maize genetic diversity. Most attention has focused on the threat to biodiversity, but think about the patent issue. For centuries, farmers all over the world have traded wild maize varieties and farmers’ varieties from Mexico in efforts to improve their corn crops. What will happen to that trading once those varieties are polluted with GM traits? Will a Brazilian farmer now be able to buy corn seed from Mexico that is genetically modified to resist insects, without royalties or other restrictions?

The shape of things to come was foreshadowed by the case of Percy Schmeiser, the Canadian canola farmer who was sued by Monsanto for having its patented Roundup Ready (RR) canola in his fields. He had not bought the company’s seed, but his crops were contaminated with RR canola one year–spilled, blown, or cross-pollinated–and he used seed from the polluted section to plant the next year’s crop. Nor did he benefit from the GM seed, since he did not spray with Roundup herbicide. Nevertheless, the Federal Court of Canada found him guilty of violating Monsanto’s patent. Common sense would indicate that either the Schmeiser decision must be reversed or else Schmeiser must have done something wrong. But here is the catch: It does not matter under patent law whether he intended to plant RR canola or even knew he had it. As the court pointed out, “intention is immaterial” under patent law. Patent law was not written with living things in mind, but rather to protect steam engines and better mousetraps. If a patented engine or mousetrap is found in my garage, it is a sure bet that it did not grow there. Someone must have violated the patent for it to be there. Thus, patent law cuts out the usual issues of actions and intent and simply places responsibility on the unlicensed possessor of a patented item. But the same reasoning does not make sense with plants: A patented plant may have grown there, with no wrongdoing by anyone, and the perverse result is that the victims of genetic pollution are legally culpable.

Beyond the dangers of farm concentration and cultural dislocation, of unreliable seed supplies and the threat of famine, and of increasing poverty and dependence in poor countries, critics also worry that patented and centralized seed production endangers biodiversity. Critics widely consider the biotechnology industry to be intent on eliminating the ancient practice of farmers saving and exchanging seeds, forcing farmers (and all of us who eat) to depend on commercial seed suppliers. If seeds are sold, not saved, though, the seed market will tend to produce only a few varieties; the rest will disappear. But diversity of plant varieties is the source of genetic traits for future crop improvement.

For better or worse, then, the biotech debate is a political debate, not just a scientific one. But reasoned argument is not the province of science alone. Critics of biotechnology raise serious and legitimate questions and support their positions with economic, political, and historical analysis. This does not mean they are right. We do not yet know if their analysis can stand up to serious scrutiny. The prominent public and media discussion of GM food has focused on the advocates’ preferred issue: the science of food safety. Often this is not just a preference but a vigorous exclusion of other issues. The United States, for example, negotiates aggressively in the World Trade Organization to prevent labeling or regulating of GM products on any grounds other than science-based safety. The advocates of biotechnology think they can win the safety issue and win the acceptance of biotechnology–and all that comes with it–by showing that eating GM food is safe.

The critics, too, have a political reason to keep the safety issue in the public eye: It is their major source of clout. Compared to the biotech industry, biotech critics have negligible resources of money or the political access money brings. Their only source of power is public support. But the public’s eyes glaze over at the word “patents.” The public does care about its safety, though, and somewhat about damage to the environment. Critics cannot drop the safety issue, on pain of becoming invisible, and they have limited ability to shift the focus onto their major issues.

Unexamined issues

The lack of vigor with which advocates have considered critics’ real concerns is illustrated in a series of articles by some of the world’s leading plant scientists published on the Web site of the American Society of Plant Biologists under the title Genetically Modified Crops: What Do the Scientists Say? Every scientist-author mentions the social, political, and economic issues, but few go far into them. Maarten J. Chrispeels gives the most detailed analysis: “Those who oppose GM crops are also quick to point out that this technology primarily benefits the multinational corporations that sell the seeds, and that these corporations are more interested in their own bottom line (always referred to as ‘corporate greed’) than in ‘feeding the poor.’ True enough, the big corporations are not working on the crops of the poor.” He describes the Green Revolution of the 1960s and 1970s, which greatly boosted agricultural productivity, and acknowledges some of its problems and the damage done to small farmers around the world, and even recognizes that these problems motivate critics. But rather than examining why the Green Revolution had these consequences and how we might make sure biotechnology does not repeat these mistakes, Chrispeels veers away from addressing the problems he has raised. Instead, he suggests “public/private partnerships,” despite admitting that such partnerships “must be based on mutual trust and common goals.” But since, as he acknowledges, “corporations are not working on the crops of the poor,” there is little reason to expect common goals or mutual trust. In the end, he has nothing to suggest but voluntary generosity by the private sector and a vague proposal for the creation of “an international clearinghouse or institute funded by the large multinationals (Am I a dreamer?) to foster such a partnership.”

This analysis is truncated just where biotech critics think it should begin: with the social and political conditions that can make public/private partnerships work for the common good, or fail. Critics are pessimistic because they believe that the circumstances that compromised the Green Revolution are much worse today than 40 years ago. Agricultural research has shifted dramatically from the public to the private sector, and corporations today have far more power and less accountability. Scientists and critics both want technology transfer to poor countries. But technology transfer is a difficult issue, and the strings attached in the transfer can be more important than the technology itself.

A central issue for almost all biotech critics is intellectual property rights, particularly patents on plants.

On the specific issue of patents on plants, the analyses by advocates prove similarly unsatisfying. One of the authors, Ingo Potrykus, had earlier been chided by civil society organizations for turning exclusive control of marketing “golden rice,” the result of millions of dollars of public and philanthropic research money, over to a private corporation, Syngenta. He writes in defense: “I was initially upset. It seemed to me unacceptable, even immoral, that an achievement based on research in a public institution and exclusively with public funding and designed for a humanitarian purpose was in the hands of those who had patented enabling technology earlier . . . At that time I was much tempted to join those who fight patenting. Upon further reflection, however, I realized that the development of ‘golden rice’ was only possible because of the existence of patents . . . Without patents, much of this technology would have remained secret. To take full advantage of available knowledge to benefit the poor, it does not make sense to fight against patenting.”

One often hears this claim, that patents motivate inventors to reveal their secrets. But this claim does not hold up, according to the following logic, which has been known since the 19th century. Inventors who patent will lose exclusive control after the patent term runs out. But if it is practical to keep an invention secret, then exclusive control can be maintained perpetually. So, a profit-maximizing firm will always prefer secrecy to patenting, whenever possible. Therefore, firms apply for patents only on inventions whose disclosure seems unavoidable, and thus patents have no effect on invention disclosure. Reality is messier than this, of course, but basically this is a powerful argument, and the idea that patents prevent secrecy is a minor player in patent theory. Thus, when Potrykus cites the threat of secrecy as his reason for supporting the patenting and exclusive licensing of golden rice, he is citing one of the conceptually and historically weakest justifications for patents. Yet the dismissal of “those who fight patenting” by scientists dabbling in patent theory helps prevent this vital topic from getting the wider attention that it needs.

Magnitude of the stakes

Many biotech advocates argue that biotechnology is not so different from the cultivation and breeding of plants that farmers have done for thousands of years. This is true. One of the scientist-authors, Channapatna Prakash, illustrates the evolution of crop plants with a picture of modern corn next to its wild maize ancestor, a tiny, scraggly, unappetizing cob. The argument is that humans already manipulate nature drastically, so critics are silly to be upset about the addition of a few genes, a tiny incremental change in a long history of plant breeding. This would be correct, if the issue were manipulation of nature–but it is not. The important point is that millions of farmers invested centuries of work in developing corn all the way from that scraggly relative to the modern wonder that it is, never allowing patents or private control of the plant germplasm. Now, on the basis of a tiny incremental addition, multinational corporations are trying to take ownership of this most basic of all public goods. This–not fringe complaints about manipulating nature–is the issue for biotech critics: that biotechnology is allowing the private takeover of our common heritage, the work of all the farmers of history.

This may seem like an exaggeration of the power of patents. After all, a patent’s term is limited. What is at stake, though, is not a temporary monopoly but a change in the rules of ownership, so that what has always been public will now always be private. Once plants are patentable, there will be the same constant stream of small improvements there has been for thousands of years, but now each improvement will start a new patent term. The issue, again, is control. Patents entitle their owners to control the uses of information, backed by the power of the state. If the current privatization trend continues, then owning patents in the information economy will be like owning land was in an agrarian economy, or owning the means of production was in an industrial economy. The biotech industry and its critics have both recognized this; the scientists who still think this is a battle between knowledge and ignorance have not.

To understand the magnitude of the stakes, consider the view of one group of biotech critics: supporters of sustainable agriculture. In their mind, they have been guarding the traditions of responsible farming through 60 years of dominance by agribusiness. Though they have been losing the competition in the marketplace, they see that as caused by industrial agriculture’s being massively subsidized, which they believe society cannot keep doing forever. In recent years, with growing hostility to crop subsidies and demand for organic food booming, sustainable agriculture seems to be finally coming into its own. But now its advocates see the rules being changed: The very companies that have driven industrial agriculture, the chemical and pesticide companies, are now claiming patent ownership of the germplasm that all farmers need.

This example points at the deeper sources of alarm among biotech critics, sources that revolve around the freedom of minority or eccentric or niche activities (or non-Western cultures) to compete against the mainstream. Advocates of patents counter that there has been no change of rules, that the patent system always in principle included plants (and everything else). Their argument is that plant science just did not allow enough specificity to meet the statutory requirements of patent law, until the advent of biotechnology. To critics, this is irrelevant. The issue is not whether plant science can meet patent requirements but whether patent monopolies have any business in this arena at all.

Consider this analogy: Suppose a sports trainer is allowed to patent a new exercise regimen. (Though this example is imaginary, anyone familiar with the rapid expansion of the patent system during the past 20 years knows it is not so implausible.) Now, you see your neighbor working out in the park, follow her example, and are soon charged with patent infringement. What is your intuitive reaction to this scenario? I think most people would be outraged, seeing this as a shocking and intolerable intrusion of the patent system into their lives, monitoring and controlling their behavior way beyond its legitimate sphere. Such, I believe, is the intuition of critics of utility patents on plants.

The point of this analogy is to separate two issues: whether statutory patent requirements can be interpreted to cover a given “technology,” and whether the patent system should have jurisdiction over that sort of technology. Although patent advocates claim that the patent system has no bounds other than what can meet its requirements, biotechnology has already created one issue where the patent system is running up against other jurisdictions: the 13th Amendment outlawing slavery and the owning of humans. The U.S. Patent and Trademark Office has rejected several applications on those grounds. The imaginary exercise patent illustrates, as does the antislavery law, that we do not accept that the patent system should have no limits; the question is what those limits should be.

Critics of patenting plants do not see themselves as enemies of the patent system, any more than sustainable farming advocates see themselves as enemies of agriculture. They believe they are safeguarding responsible practices in the face of corporate abuse. Their concern is that patents on plants are less a stimulant to creativity than a tool for plundering the public domain. Many patent scholars see the system shifting, from the patent monopoly being an exception that encourages and rewards exceptional innovation, to that monopoly being the business-as-usual norm that protects corporate investment in routine R&D. Far from challenging the patent system as a whole, critics of patenting plants base their position on a fundamental principle of the patent system: The government gives patents to increase, not decrease, the public domain. This idea has always been codified as the “novelty” requirement, probably the most basic idea of patents: You can only patent something new; you cannot take what is already there and transfer it from the public domain to private ownership. The principle is reinforced by the requirement that inventors disclose details so that their inventions will enrich society’s public knowledge. Extending the patent system to cover plants, however, is a dramatic impoverishing of the public domain.

Advocates do have a response, as seen by extending the exercise analogy. Suppose most people exercise at health clubs, which purchase site licenses for their members to use patented cutting-edge programs. With more money to be made from developing training programs, more innovation in training occurs, which benefits health club members and so causes overall public health to go up. The modest cost of this improvement in public health is merely that we must protect the investment of trainers and health clubs by cracking down on those fringe elements of society who insist on exercising privately or who live in rural areas far from health clubs.

What do biotech critics find so wrong with this line of reasoning? We must uncover the intuition that lies here, because it explains a lot of the passion that infuses this debate. Why shouldn’t everyone who wants to exercise do it how and where the exercise patent owners let them? Why shouldn’t everyone who wants to grow plants have to grow what the seed patent owners sell? After all, they give us better exercises and better plants. All we have to give up for this greater efficiency is some flexibility, some control, and some freedom. The patenting of plants means that we need permission from corporations to grow things. Those of us who do things in the approved ways–say, grow a garden from purchased seeds–probably will not have any trouble. But those who do something eccentric, like breed orchids (let alone follow Mendel and cross peas), could be liable. Perhaps exceptions would be allowed; perhaps for rose breeding, an amateur tradition. But it is this forced amateur status that galls: The corporate owners of the world’s genetic resources condescending to allow us to putter around with nature in approved ways, or with unpatented plants inferior to what we can buy at the store, like children allowed to play with worn-out or obsolete adult clothes and tools.

Clash of philosophies

Large philosophies of life are at odds here. Think of it as the clash between the virtues of democratic competition and the virtues of technocratic monopoly. Patents are always a suppression of freedom and competition in favor of monopoly, which we accept as a special and narrow exception to our democratic presumption that competition is better. Critics of corporate control of the food supply see such an expanded suppression of competition as shutting down alternative ways to live, to eat, and to relate to the natural world and also to the social world. In this view, the diversity of cultures and lives is at stake. These critics believe that the rough-and-tumble competition among many different perspectives, like that among species, is what drives innovation, adaptation, and progress: the source of future creativity and value and our protection against disease, shocks, mistakes, or whatever comes tomorrow.

The deep intuition that lies beneath the biotech critics’ passionate rejection of patents on life is that the balance between freedom and control that is patent policy is the same balance that democracy itself depends on. No one can enter the competition of life without resources, and a thriving public domain is where most of us get them, as well as being the arena in which competition takes place. Democracy, freedom, and competition come as a package, and its price is some inefficiency and duplication of effort. On the other hand, a dream of technocratic efficiency tempts us with the promise that we could raise the standard of living overall by just accepting a little more uniformity. But the danger of letting a special and narrow exception to our freedom become the norm, justified by its claim to benefit the most people, is called “the tyranny of the majority,” and it is a perpetual threat to democracy. When patents are carefully bounded and apply only to new things–the essence of the patent idea–then there is no threat of tyranny. But when the patent system recognizes no bounds, and the world’s food crops are considered fair game for monopoly control, then there is a threat.

I believe that many, though not all, critics of biotechnology see this debate as absolutely the most crucial event of our era, with democracy and more at stake. This is the reason for the depth of their outrage and for the slogan “No patents on life.” The critics’ question is: Who will control the world’s food supply? Will it be many individuals (mostly farmers) democratically competing, or corporate monopolists backed by the state power of patents? And implicit in that question is the question of whether humanity’s long experiment with democracy is over.

The current simplistic biotech debate has stalled serious exploration of a wide range of important issues. Moving ahead will require interdisciplinary work equal in complexity to the topic, not solo efforts by select groups of scientists. And it will require of scientists a respect for other disciplines that has so far been lacking in the biotech debate. When the many points of view complement one another in a complete picture of biotechnology, its costs as well as its benefits, then the biotech debate can begin in earnest–as a debate, a conversation, a productive exchange and evaluation of views. Perhaps then we will start to move beyond the present angry impasse.

Forum – Winter 2004

Forensic science, no consensus

In “A House with No Foundation” (Issues, Fall 2003), Michael Risinger and Michael Saks raise what they perceive to be serious questions regarding the reliability of forensic science research conducted by law enforcement organizations and, in particular, by the Federal Bureau of Investigation (FBI) Laboratory. They make sweeping, unsupported statements about scientists’ bias, manufactured data, and overstated interpretations. Although it is not possible to address each of these ill-founded remarks in this brief response, it is apparent that the authors are unaware of, or at least misinformed about, the FBI Laboratory R&D programs, which foster an environment contrary to the one the authors portray. I appreciate this opportunity to inform your readers about the strong foundation that forensic science research provides for the scientific community.

The FBI Laboratory’s Counterterrorism and Forensic Science Research Unit (CTFSRU) is responsible for the R&D within the FBI Laboratory and provides technical leadership in counterterrorism and forensic science for federal, state, and local laboratories. The CTFSRU focuses its research activities on the development and delivery of new technologies and methodologies that advance forensic science and fight terrorism. Our R&D efforts range from fundamental studies of microbial genomes to the development of new analytical tools and methods for casework.

In 2003, the CTFSRU had 115 active R&D projects, with a budget of $29 million. Our CTFSRU research staff is composed of 15 Ph.D.-level and 7 M.S./B.S.-level permanent staff scientists, supported by approximately 30 visiting scientists, including academic faculty and postdoctoral, graduate, and undergraduate students from accredited universities. In addition, to leverage our R&D activities and allow state and local laboratories to participate directly in FBI research efforts, the FBI Laboratory’s Research Partnership Program was initiated in 2002. Since its inception, partnership opportunities have expanded beyond research collaborations to include creation and maintenance of databases, method development, testing and validation, and technology assessment and transfer. Today, scientists from approximately 40 state, local, federal, and international laboratories are research partners. The results of these completed research projects are published in peer-reviewed scientific journals, findings are presented at scientific meetings, and advanced technical training is provided to the forensic community via formal classes and training symposia.

The FBI Laboratory R&D collaborations involve prestigious researchers from academia, national laboratories, and private industry, as well as forensic science laboratories worldwide. With this diversity of input and review of our research projects, it is difficult to comprehend how one could perceive that research scientists are biased simply because the law enforcement community is providing funding for their research.

A more in-depth discussion regarding the specific issues that have been raised in this and other articles related to the forensic sciences will be published in the April 2004 issue of Forensic Science Communications.

DWIGHT E. ADAMS

Director

FBI Laboratory

Quantico, Virginia


In their fine article, Michael Risinger and Michael Saks present an all-too-accurate critique of the so-called forensic identification sciences. Some, they argue, are so deeply entrenched in the litigation system that they have been able to escape scrutiny for decades despite their having no scientific foundation at all. Experts simply testify based on their experience, and courts accept their opinions without any serious inquiry into such matters as validity, reliability, and observer bias. In some instances, testing of practitioners and methodologies has begun, but the research designs and analyses are so biased as to render the results of little scientific value.

There are some exceptions, however, which Risinger and Saks do not discuss. Examining areas in which forensically oriented research is being conducted with scientific rigor can help us to identify the conditions under which such research can more generally be made to comply with the ordinary standards of scientific inquiry.

Ironically, a good example of the development of sound forensic science has grown out of the legal community’s premature acceptance of some questionable methods. In the 1960s, a scientist from Bell Labs, which had earlier developed the sound spectrograph, claimed that human voices, like fingerprints, are unique and that spectrograms (popularly called “voiceprints”) could be used to identify people with great accuracy. Studies published in the 1970s showed low error rates, and judges began to permit voiceprint examiners (often people who had received brief training in police labs) to testify as experts in court.

Unlike the forensic sciences on which Risinger and Saks focus, however, acoustic engineering, phonetics, and linguistics are robust fields that exist independently of the forensic setting. Prominent phoneticians and engineers spoke up against using voiceprints in court. Then, in 1979, a committee of the National Research Council issued a report finding that voiceprints had not been shown to be sufficiently accurate in forensic situations, where signals are often distorted or degraded. Although some courts continue to admit voiceprint analysis, its use in the courtroom has declined.

At the same time, however, new research into automatic speaker recognition technology has been making significant progress. Laboratories around the world compete annually in an evaluation sponsored by the National Institute of Standards and Technology, which is intended to simulate real-life situations. Rates of both misses and false alarms are plotted for each team, and the results of the evaluation are published. How can this happen even with government-sponsored research? First, the researchers are trained in fields that exist independently of courtroom application. Second, reliable methods are established in advance and are adhered to openly and on a worldwide basis.

Not every forensic field has both the will and the scientific prowess to engage in such serious research. But at least it appears to be possible when the conditions are right.

LAWRENCE M. SOLAN

Professor of Law

Director, Center for the Study of Law,

Language and Cognition

Brooklyn Law School

Brooklyn, New York

PETER M. TIERSMA

Professor of Law

Joseph Scott Fellow

Loyola Law School

Los Angeles, California

Lawrence M. Solan and Peter M. Tiersma are the authors of the forthcoming book Language on Trial: Linguistics and the Criminal Law.


Michael Risinger and Michael Saks make several unfortunate remarks about the forensic sciences in general and about me and a colleague in particular. But we do agree on one thing at least.

First, Risinger and Saks state that “(m)any of the forensic techniques used in courtroom proceedings . . . rest on a foundation of very weak science, and virtually no rigorous research to strengthen this foundation is being done.” They list “hair analysis” as one of the weakly founded techniques in need of more research. By this phrase, I take it that the authors mean “microscopical hair comparisons,” because they discuss it as such later in their paper, but “hair analysis” is often used to describe the chemical analysis of hairs for drugs or toxins–a very different technique. The forensic comparison of human hairs is based on histology, anthropology, anatomy, dermatology, and, of course, microscopy. An extensive literature exists for the forensic hair examiner to rely on, and the method appears in textbooks, reference books, and (most importantly) peer-reviewed journal articles. Although DNA has been held up by many as the model for all forensic sciences to aspire to, not every forensic science can or should be assessed with the DNA template. Forensic hair comparisons do not lend themselves to statistical interpretation as DNA does, and therefore other approaches, such as mitochondrial DNA analysis, should be used to complement the information derived from them. A lack of statistics does not invalidate microscopical hair comparisons; a hammer is a wonderful tool for a nail but a lousy one for a screw, and yet the screw is still useful given a proper tool.

Second, the authors use a publication I coauthored with Bruce Budowle on the correlation of microscopical and mitochondrial analyses of human hairs as an example of exclusivity in research (they describe it as an example of a “friends-only regime”). I worked at the FBI Laboratory, as did Budowle, during the compilation of the data found in that paper, and the paper was also submitted for publication while I worked there. Subsequently, I left the FBI and took my current position; the paper was published after that. Regardless of my employer, I have routinely conducted research and published with “outside” colleagues; Budowle certainly has. The data from this research were presented at the American Academy of Forensic Sciences annual meeting and appeared in the peer-reviewed Journal of Forensic Sciences. How our paper constitutes a “friends-only regime” is beyond me.

Third, Budowle and I are accused of having “buried” a critical result “in a single paragraph in the middle of the paper.” This result, where only 9 hairs, out of 80 that were positively associated by microscopical comparisons, were excluded by mitochondrial DNA, appeared in the abstract, Table 2 (the central data of the paper), and the “Results” and “Discussion” sections. Furthermore, these 9 hairs were detailed as to their characteristics (ancestry, color, body location, etc.) in Tables 3 and 4, as well as in the corresponding discussion. “Buried,” I feel, is not an accurate description.

Finally, Risinger and Saks suggest that my coauthor and I equate the value of the two methods, placing neither one above the other. As far as it goes, that much is true: Microscopical and mitochondrial DNA analyses of human hairs yield very different but complementary results, and one method should not be seen as “screening for” or “confirming” the other. As an example, examining manufactured fibers by polarized light microscopy (PLM) and infrared spectroscopy (IR) yields different but complementary results, and the two methods are routinely applied in tandem for a comprehensive analysis. PLM cannot distinguish among subspecies of polymers, but IR provides no information on optical properties, such as birefringence, that help to exclude many otherwise similar fibers. In the same way, microscopy and mitochondrial DNA methods provide more information about hairs together than separately. To quote from our paper, “(t)he mtDNA sequences provide information about the genotype of the source individual, while the microscopic examination evaluates physical characteristics of an individual’s hair in his/her environment (phenotype).” In our paper, Budowle and I concur with other researchers in the field that “there will be little if any reduction in the level of microscopic examination as it will be both necessary and desirable to eliminate as many questioned hairs as possible and concentrate mtDNA analysis on only key hairs.”

This does not, however, mean that we feel, as Risinger and Saks intimate, that “all techniques are equal, and no study should have any bearing on our evaluation of future cases in court.” The sample in this one study was not representative in a statistical or demographic sense; the study was a review of cases submitted within a certain time frame that had been analyzed by both methods in question. Had Budowle and I extrapolated the results of this one study to the population at large or to all forensic hair examiners, we would have been outside the bounds of acceptable science. I’m sure the authors would have taken us to task for that, as well.

I strongly agree with Risinger and Saks’ statement that “any efforts that bring more independent researchers . . . into the forensic knowledge-testing process should be encouraged” and their call for more independent funding of forensic science research. More independent forensic researchers lead to more demands for funding from public and private agencies interested in improving science, such as the National Science Foundation and the National Institutes of Health. Forensic science has a history of treatment as an “also-ran” science. That perception needs to change for the betterment of the discipline and, more importantly, the betterment of the justice system. It can’t happen without additional money, and it’s time that the science applied to our justice system was funded as if it mattered.

MAX M. HOUCK

Director

Forensic Science Initiative

West Virginia University

Morgantown, West Virginia


In defense of crime labs

In “Crime Labs Need Improvement” (Issues, Fall 2003), Paul C. Giannelli makes a few valid points about the need to improve the capacity and capabilities of forensic science laboratories in the United States. He makes several invalid points too.

Giannelli states that “the forensics profession lacks a truly scientific culture–one with sufficient written protocols and an empirical basis for the most basic procedures.” This claim is without merit. Giannelli cites as proof significant problems in West Virginia and the FBI Crime Laboratory that occurred years ago. A few aged examples should not be used to draw the conclusion that “the quality of the labs is criminal.” One need only walk through an accredited crime laboratory, such as the Los Angeles Police Department (LAPD) Crime Laboratory, to observe that forensic science is the embodiment of scientific culture. Since 1989, at least 10 scientific working groups have been formed, each with a specific scientific focus such as DNA, firearms, or controlled substances. These groups are federally sponsored and have broad representation from the forensic science community. They have worked to establish sound scientific guidelines for analytical methods, requirements for scientist training and education, and laboratory quality-assurance standards. The LAPD Crime Laboratory has not only accepted these scientific guidelines but has participated in their development.

Giannelli claims that accreditation rates for crime laboratories and certification rates for scientists are too low. He cites a lack of funding as contributing to these low rates and to a “staggering backlog of cases.” A review of the situation in California in a recent publication, Under the Microscope, California Attorney General Bill Lockyer’s Task Force Report on Forensic Services (August 2003), provides a different view of the state of forensic crime laboratories. Some of its findings: 26 of California’s 33 public crime laboratories are accredited by the American Society of Crime Laboratory Directors-Laboratory Accreditation Board. The seven nonaccredited labs all intend to apply for accreditation in the near future. The LAPD Crime Laboratory was accredited in 1998.

Certification of criminalists, questioned document examiners, latent fingerprint specialists, and other forensic specialists is not mandatory in California. Many have voluntarily undergone examination and certification by the American Board of Criminalistics, the International Association for Identification, and other certification boards, but most have not. Most forensic specialists do work in accredited laboratories that follow established standards that include annual proficiency testing of staff.

More than 450,000 casework requests are completed in California crime laboratories each year. A relatively low number of requests, 18,000 (about 4 percent), are backlogged. Of the backlogged requests, most require labor-intensive services such as analysis of firearms, trace evidence, fire debris, latent fingerprints, and DNA.

To reduce backlogs and improve analysis turnaround times, the state and local agencies need to increase permanent staffing levels.

Giannelli’s claim that more funding is needed to reduce backlogs and improve analysis turnaround times is valid and welcome. We also agree with his urging that the nation’s crime laboratories must be accredited and examiners certified. But his questioning of whether forensic science is “truly a scientific endeavor” is clearly invalid.

WILLIAM J. BRATTON

Chief of Police

Los Angeles, California


Polygraph fails test

For several years now, the widespread use of the polygraph as a screening tool in our country’s national laboratories has been a concern of mine, and I am glad that David L. Faigman, Stephen E. Fienberg, and Paul C. Stern have raised the issue (“The Limits of the Polygraph,” Issues, Fall 2003). Although the polygraph is not completely without merit in investigating specific incidents, I have yet to see any scientific evidence that this method can reliably detect persons who are trained to counter such techniques, or that it can deter malicious behavior. In fact, based on my relationship with the national laboratories in the state of New Mexico, I am certain that these tests have the effect of reducing morale and creating a false sense of security in those who are responsible for safeguarding our nation’s greatest secrets.

There are now three studies spanning 50 years that validate my concerns. In 1952, the Atomic Energy Commission (AEC) created a five-person panel of polygraph-friendly scientists to review the tool’s merit in a screening program, and in the following year the AEC issued a statement withdrawing the program as a result of congressional inquiry and serious concerns expressed by these scientists. In 1983, the Office of Technology Assessment concluded that, “the available research evidence does not establish the scientific validity of the polygraph test for personnel security screening.” And in 2003, a comprehensive study by the National Research Council of the National Academy of Sciences concluded essentially the same thing, even using accuracy levels well above the known state of the art.

Although I am encouraged that the Department of Energy is reducing its use of the polygraph in screening its employees, approximately 5,000 people every five years, with no record of misbehavior at all, will still be subjected to this test. I believe that polygraph use should be limited to highly directed investigations on a one-on-one basis, combined with other data about an individual. I am also troubled by the Department of Defense’s recently authorized expansion of its use of polygraphs. From all accounts, neither the technology nor our understanding of it has advanced much–if at all–in half a century, which leaves its results a highly dubious indicator at best, and a boon to our nation’s enemies at worst. In using polygraphs as an open screening tool, I believe that the Department of Defense is making the same mistake that the Department of Energy is now trying to correct.

SEN. JEFF BINGAMAN

Democrat of New Mexico


The fingerprint controversy

Many of the issues in Jennifer L. Mnookin’s “Fingerprints: Not a Gold Standard” (Issues, Fall 2003) have been discussed at great length during the past few years by many professionals in both the legal and forensic communities. Her contention that friction ridge identification has “not been sufficiently tested according to the tenets of science” raises this question: To what degree should these undefined “tenets of science” determine the evidentiary value of fingerprints?

In Daubert v. Merrell Dow Pharmaceuticals, the U.S. Supreme Court made general observations (more commonly referred to as “Daubert criteria”) that it deemed appropriate to assist trial judges in deciding whether “the reasoning or methodology underlying the testimony is scientifically valid.” The court also stated, “The inquiry envisioned by Rule 702 is, we emphasize, a flexible one. Its overarching subject is the scientific validity–and thus the evidentiary relevance and reliability–of the principles that underlie a proposed submission. The focus, of course, must be solely on principles and methodology, not on the conclusions they generate.”

Many years of scientific testing and validation of fingerprint uniqueness and permanency–the primary premises of friction ridge identification–were significant in a British court’s decision in 1902 to allow fingerprint evidence. Since that time, independent research, articles, and books have documented the extensive genetic, biological, and random environmental occurrences that take place during fetal growth to support the premises that friction ridge skin is unique and permanent. The Automated Fingerprint Identification System (AFIS) today makes it possible to electronically search and compare millions of fingerprints daily. To the best of my knowledge, fingerprint comparisons using AFIS systems worldwide have never revealed a single case of two fingerprints from two different sources having identical friction ridge detail in whole or in part. These findings attest to the uniqueness of fingerprints. Although this cannot be construed as “true” scientific research, it should not be discounted, in my opinion, when evaluating the probative value of fingerprint evidence. Mnookin also acknowledges that, “fingerprint identification . . . may also be more probative than other forms of expert evidence that continue to be routinely permitted, such as physician’s diagnostic testimony, psychological evidence, and other forms of forensic evidence.”

Although I support further scientific research to determine statistically “how likely it is that any two people might share a given number of fingerprint characteristics,” one must be extremely careful when bringing a statistical model into the courtroom. Misinterpretations of DNA statistical probability rates have been reported.

I believe that the identification philosophy and scientific methodology together create a solid foundation for a reliable and scientifically valid friction ridge identification process. S/Sgt. David Ashbaugh of the Royal Canadian Mounted Police describes the identification philosophy as follows: “An identification is established through the agreement of friction ridge formations, in sequence, having sufficient [observed] uniqueness to individualize.” The scientific methodology involves the analysis, comparison, and evaluation of the unknown and known friction ridge impressions for each case by at least two different friction ridge identification specialists. This approach to friction ridge identification follows an extremely uniform, logical, and meticulous process.

As long as properly trained and competent friction ridge identification specialists correctly apply the scientific methodology, the errors will be minimal, if any. The primary issue here, I believe, is training. Is the friction ridge identification specialist properly trained and does he/she have the knowledge and experience to determine that a small, distorted, friction ridge impression came from the same source as the known exemplar? This issue can certainly be addressed on a case-by-case basis, once fingerprint evidence is deemed admissible by the presiding judge, by qualifying the expertise of the friction ridge identification specialist before any fingerprint evidence is given.

Shortcomings do exist in some areas of friction ridge identification, and they should be addressed: specifically, the need for standardized training and continuing research. The evidentiary value, however, of friction ridge identification is significant and should not be excluded from the courtroom on the basis that it does not fit into one individual’s interpretation of the “tenets of science.”

MARY BEETON

1st Vice President

Canadian Identification Society

Orangeville, Ontario, Canada

www.cis-sci.ca


I was gratified to read Jennifer Mnookin’s superb article because it shows that yet another eminent scholar agrees with what I have been arguing for some years now: that courts admitted fingerprint evidence nearly a century ago without demanding evidence that fingerprint examiners could do what they claimed to be able to do. What’s more alarming, however, is that courts are now poised to do exactly the same thing again.

Since 1999, courts have been revisiting the question of the validity of forensic fingerprint identification. But no court has yet managed to muster a rational argument in defense of fingerprint identification. Instead, courts have emphasized the uniqueness of all human fingerprints (which is irrelevant to the question of how accurately fingerprint examiners can attribute latent prints) and, as Mnookin points out, “adversarial testing.” But any scientist can easily see that a criminal trial is not a scientific experiment.

Additionally, most scientists will probably be shocked to learn that fingerprint examiners claim that the error rate for fingerprint identification can be parsed into “methodological” and “human” error rates and that the former is said to be zero, despite the fact that known cases of misidentification have been exposed. (Of course, there is no concept of a “methodological error rate” in any area of science other than forensic fingerprint identification. Try typing it into Google.) It’s like saying that airplanes have a zero “theoretical” crash rate. But courts have accepted the zero “methodological error rate” and, by fiat, declared the “human error rate” to be “vanishingly small,” “essentially zero,” or “negligible.”

Such arguments, which would be laughed out of any scientific forum, have found a receptive audience in our courts. As a result of this, fingerprint examiners have answered demands for scientific validation by invoking the fact that courts accept fingerprint identification. Since fingerprint examiners do not answer to any academic scientific community, legal opinions have come to substitute for the scientific validation that Mnookin, like virtually every disinterested scholar who has examined the evidence, agrees is lacking. Legal scholars, psychologists, and scientists–the weight of scholarly opinion is clear. The lone holdouts are the ones that matter most: the courts, which are inching ever closer to declaring the issue closed and to treating questions about the validity of fingerprint identification as absurd on their face.

Science has encountered a roadblock in the courts. But the scientific community, which has remained largely silent on the issue, can break the impasse. Fingerprint identification may or may not be reliable, but in either case “adversarial testing” and “zero methodological error rates” are not good science. Will the scientific community allow these notions to enjoy the imprimatur of our courts, or will it demand that scientists and technicians (whichever fingerprint examiners are) provide real evidence to support their claims? This is one issue where the scientific community needs to serve as the court of last resort.

SIMON A. COLE

Assistant Professor of Criminology, Law,

and Society

University of California, Irvine

Simon A. Cole is the author of Suspect Identities: A History of Fingerprinting and Criminal Identification (Harvard University Press, 2001).


For most of us, fingerprints do indeed conjure up the image of a gold standard in personal identification. In her provocative article, Jennifer Mnookin questions that view and raises a number of issues about the scientific evidence supporting the admissibility of fingerprints in criminal proceedings. I will address only one of these issues: the historical question she asks about why fingerprinting was accepted so rapidly and with so little skepticism. The answer is simple: Fingerprints were adopted initially because they enjoyed a very strong scientific foundation attesting to their efficiency and accuracy in personal identification. With widespread adoption and successful use, this early scientific foundation receded from view, to the point where it has been all but invisible, including in several recent and otherwise scholarly books on the subject.

The scientific foundation of fingerprints as a means of accurate personal identification dates from the work of Francis Galton, particularly in three books he published in 1892-1895. In his 1892 book Fingerprints, he presented evidence for their permanence (through the examination of several series of prints taken over a long span of time) and developed a filing scheme that permitted storage and rapid sifting through a large number of prints. The filing scheme, as further developed by E. R. Henry in 1900, spread to police bureaus around the world and made the widespread use of the system feasible. That portion of Galton’s work is reasonably well known today. But more to the point of present concerns, Galton also gave a detailed quantitative demonstration of the essential uniqueness of an individual’s fingerprints; a demonstration that remains statistically sound by today’s standards.

The quantification of the rarity of fingerprint patterns is much more difficult than is the similar study of DNA patterns. Strands of DNA can be taken apart, and the pieces to be subjected to forensic study may be treated as nearly statistically independent, using principles of population genetics to allow for the known slight dependencies and for differences in pattern among population subgroups. The (nearly) independent pieces of evidence, no one of which has overwhelming force, may then be combined to produce quantitative measures of uniqueness that can, when carefully presented, overwhelm any reasonable doubt.
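To make that combination step concrete, here is a minimal sketch of the product rule that underlies such calculations; the number of loci and the per-locus match probabilities are illustrative assumptions, not figures drawn from the letter:

\[
P_{\text{match}} \;\approx\; \prod_{i=1}^{n} p_i \quad \text{for independent loci; e.g., } n = 10 \text{ and } p_i = \tfrac{1}{10} \text{ give } P_{\text{match}} \approx 10^{-10}.
\]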

Fingerprints present a different set of challenges. They have one advantage over DNA: the fine details of fingerprints are developmental characteristics, and even identical twins sharing the same DNA have distinguishably different fingerprints (as Galton in fact showed). But a fingerprint exhibits widespread correlation over the parts of its pattern and is not so easily disaggregated into elementary components, as is a DNA strand. How then did Galton overcome this obstacle?

In his 1892 book, Galton presented an ingenious argument that disaggregated a full fingerprint into 24 regions and then gave for each region a conservative assessment of the conditional rarity of the regional pattern, taking account of all detailed structure outside of that region. This conditional assessment successfully swept aside worries about the interrelations across the whole of the pattern and led him to assert a conservative bound on a match probability: Given a full and well-registered fingerprint, the probability that another randomly selected fingerprint would match it in all minutiae was less than 1 in 2^36 (about 1 in 69 billion, although Galton’s calculation was not equal to his ingenuity, and he stated this was 1 in 64 billion). If two prints were matched, this would be squared, and so forth. I give a more detailed account of Galton’s investigation in a chapter of my book Statistics on the Table (Harvard University Press, 1999).
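As a quick arithmetic check of the figures just quoted (a restatement of the numbers above, not of Galton’s full argument):

\[
2^{36} = 68{,}719{,}476{,}736 \approx 6.9 \times 10^{10},
\]

so the bound works out to roughly 1 in 69 billion; the oft-quoted 1 in 64 billion is the figure Galton himself reported.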

If Galton’s investigation withstands modern scrutiny, and it does, why should we not accept it today? There are two basic reasons why we might not. First, he assumed that the print in question was a full and accurately registered print, and he made no allowance for partial or blurred prints, the use that produces most current debate. Galton himself would accept such prints, but cautiously, and subject to careful (but unspecified) deliberation by the court. And second, his quantitative assessment of the rarity of regions was made on the basis of narrow empirical experience; namely, his own experimentation. He proceeded carefully and his results deserve our respect, but we require more today.

Galton’s investigation was crucial to the British adoption of fingerprints, and his figure of 1 in 64 billion was quoted frequently in the decades after his book, as fingerprints gained widespread acceptance. But as time passed and success (and no significant embarrassments) accumulated, this argument receded from view. Nonetheless, for most of the first century of their use, fingerprints enjoyed a scientific foundation that exceeded that of any other method of forensic identification of individuals. Although more study would surely be beneficial for its modern use, we should not wonder about the initial acceptance of this important forensic tool.

STEPHEN M. STIGLER

Department of Statistics

University of Chicago

Chicago, Illinois


In addition to the problems with fingerprint identification discussed by Jennifer Mnookin, a new source of potential error is being introduced by the expanded use of digital technology. The availability of inexpensive digital equipment such as digital cameras, printers, and personal computers is providing even the smallest police departments with their own digital forensic laboratories. Although this would appear to be a sign of progress, in practice it is introducing a new realm of errors through misuse and misunderstanding of the technology. Until police receive better training and use higher-quality equipment, the use of digital technology should be challenged in court, if not eliminated.

Consider what happens in fingerprinting. Crime investigations often involve the recovery of latent fingerprints, the typically small, distorted, and smudged fragments of a fingerprint found at crime scenes. Police often use digital cameras to photograph these fingerprints, instead of traditional cameras with 35-mm forensic film. Although the digital cameras are convenient for providing images for computer storage, they are not nearly as accurate as the analog cameras they are replacing. When shooting in color, a two-megapixel digital camera produces an image with only 800 dots per inch, whereas a 35-mm camera provides 4,000-plus dots per inch. As a result, critical details including pores and bifurcations can be lost. Colors are similarly consolidated and very light dots are lost.
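The arithmetic behind those resolution figures can be sketched as follows; the sensor width of 1,600 pixels (a typical two-megapixel format) and the 2-inch field of view are illustrative assumptions chosen to be consistent with the letter’s numbers, not specifications taken from it:

\[
\frac{1600 \text{ pixels}}{2 \text{ inches}} = 800 \text{ pixels per inch}, \qquad \frac{8000 \text{ pixels}}{2 \text{ inches}} = 4000 \text{ pixels per inch},
\]

so matching the 4,000-plus figure over the same field of view would require roughly 8,000 pixels across, far more than a two-megapixel sensor provides.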

Distortion occurs whenever digital images are displayed. Every monitor and every printer displays color differently. The fact that the resulting image looks crisp and clear does not mean that it is accurate.

Once a latent fingerprint is entered into a computer, commercial software is often used to enhance its quality. Adobe Photoshop, an image-creation and editing product, can be combined with other software products to improve the latent image by removing patterns such as the background printing on a check, the dot pattern on newsprint, or the weave pattern on material that blurs the image of a fingerprint. The software can then be used to enhance the image of the remaining fingerprint.

The problems do not end with the image itself. Once information is computerized, it needs to be protected from hackers. Many police departments allow their computer vendors to install remote access software such as PC Anywhere to facilitate maintenance. Unfortunately, this also makes the computer particularly vulnerable to unauthorized attack and alteration of the computer’s files. Most police departments have no rapid means of determining whether their digital information was modified.

By the time such a digital fingerprint image reaches a courtroom, there is no easy way of verifying it. Even the police department’s own fingerprint examiners who take the stand may not realize that they are working with a digital picture on printer paper and not an original photograph of a fingerprint. Of the 40-plus Daubert challenges to fingerprints in court, none have been based on the inaccuracy or loss of detail associated with the use of digital technology or the possibility of unauthorized manipulation of computer images. In most instances, the defense attorney is not aware that the fingerprint image is digital. Indeed, how could the defense know it is dealing with a digital image if the fingerprint examiner does not?

It might be futile to forbid the police to use digital technology in their work, but it is clear that before this technology can be used successfully, we must develop rigorous standards for the quality and reliability of digital images, extensive training for police personnel, and improved computer security.

MICHAEL CHERRY

Woodcliff Lake, New Jersey

LARRY MEYER

Towanda, Illinois


Radiological terrorism

“Securing U.S. Radioactive Sources” by Charles D. Ferguson and Joel O. Lubenau (Issues, Fall 2003) identifies noteworthy issues concerning the potential malevolent use of high-risk radioactive material. These issues are not new, however. The Nuclear Regulatory Commission (NRC), its Agreement States, and our licensees have taken, or intend to take, measures beyond those mentioned in the article to address these matters–measures that I believe ensure the continued safe and secure use of radioactive material. For this reason, I would like to provide a summary of the progress made to date in ensuring the security of high-risk radioactive material. I also will discuss Ferguson and Lubenau’s recommendation that NRC advocate alternative technologies.

The U.S. government has responded effectively and in a coordinated manner to address the potential for radioactive material to be used in a radiological dispersal device (RDD). The NRC has worked with other federal agencies; federal, state, and local law enforcement officials; NRC Agreement States; and licensees to develop security measures that ensure the safe and secure use and transport of radioactive materials. In addition to the measures taken immediately after the events of September 11, 2001, we have, in cooperation with other federal agencies and the International Atomic Energy Agency (IAEA), established risk-informed thresholds for a limited number of radionuclides of concern that establish the basis for a graded application of any additional security measures. This approach ensures that security requirements are commensurate with the level of health risk posed by the radioactive material. Using these thresholds, the NRC has imposed additional security measures on licensees who possess the largest quantities of certain radionuclides of concern and will address other high-risk material in the near future. At the international level, we have worked closely with the IAEA to develop the “Code of Conduct on the Safety and Security of Radioactive Sources,” which will help ensure that other countries also will work to improve the global safety and security of radioactive materials. Taken together, these national and international cooperative efforts have achieved measurable progress in ensuring adequate protection of public health and safety from the potential malevolent use of high-risk radioactive material.

Concerning the advocacy of alternative technologies that was raised in the Ferguson/Lubenau article, I would note that the NRC does not have regulatory authority to evaluate non-nuclear technologies or, as a general matter, to require applicants to propose and evaluate alternatives to radioactive material. Moreover, such an evaluation may be significantly outside the scope of existing NRC expertise, given that it would need to consider, on a case-specific basis, not only the relative risks of the various non-nuclear technologies that could be applied, but also their potential benefits, including the societal benefits of using each technology.

NILS J. DIAZ

Chairman, Nuclear Regulatory

Commission

Washington, D.C.


The issue of radiological terrorism is one of the most serious homeland security threats we face today. Dirty bombs, although nowhere near as devastating as nuclear bombs, can still cause massive damage, major health problems, and intense psychological harm. Incredibly, action by the Bush administration is making things worse. Two successful and inexpensive radiological security programs in the Department of Energy (DOE)–the Off-Site Source Recovery (OSR) Project and the Nuclear Materials Stewardship Program (NMSP)–are under attack and may be terminated within a year.

There are more than two million radioactive sources in the United States, which are used for everything from research to medical treatment to industry. The Nuclear Regulatory Commission has admitted that of the 1,700 such sources that have been reported lost or stolen over the past five years, more than half are still missing. There is also strong evidence that Al Qaeda is actively seeking radioactive materials within North America for a dirty bomb (see “Al Qaeda pursued a ‘dirty bomb,'” Washington Times, October 17, 2003). I have been working hard to improve the security of nuclear material and reduce the threat of terrorist dirty bombs, both by introducing the Dirty Bomb Prevention Act of 2003 (H.R. 891) and by pursuing vigorous oversight of DOE’s radiological source security programs.

In their excellent article, Charles Ferguson and Joel O. Lubenau point to the OSR Project as an example of a successful federal effort to improve radioactive source security. Although the project retrieved and secured nearly 8,000 unwanted and unneeded radioactive sources from hospitals and universities between 1997 and 2003, its funding and DOE management support will be in jeopardy as of April 2004. Even more troubling is the case of the NMSP, established five years ago to help DOE sites inventory and dispose of surplus radioactive sources. At a cost of only $9 million, the program has recovered surplus plutonium, uranium, thorium, cesium, strontium, and cobalt. By collecting and storing these sources in a single secure facility, the NMSP increased safety and saved $2.6 million in fiscal year 2002. The NMSP is now prepared to assist other federal agencies, hospitals, universities, and other users of radioactive sources. However, in June 2002, DOE Assistant Secretary for Environmental Management Jessie Roberson announced that the NMSP should prepare to finish its activities and shut down in FY2003.

The Bush administration’s failure to energetically support these programs is particularly appalling in light of the May 2003 Group of Eight (G8) summit in Evian, France. With U.S. encouragement, the G8 launched a major new international radiological security initiative involving many of the tasks performed domestically by the OSR Project and the NMSP. If we can’t support these programs at home, how can we expect radioactive responsibility from others?

REP. EDWARD J. MARKEY

Democrat of Massachusetts


Charles D. Ferguson and Joel O. Lubenau provide an excellent review of the problems with the current laws and regulations governing the acquisition and disposal of radiological sources. We’d like to add two more urgent issues to the list.

First, the Group of Eight (G8) leaders detailed a new initiative last May to improve the security of radioactive sources in order to reduce the threat of dirty bombs. This important initiative includes efforts to track sources and recover orphaned sources, improve export controls, and ensure the safe disposal of spent sources. Given this unanimous guidance from the G8, it is astonishing that the U.S. Department of Energy (DOE) has recently moved in a completely opposite direction: The United States is canceling the Nuclear Materials Stewardship Program (NMSP) and closing the Off-Site Source Recovery (OSR) Project. These programs should be strengthened, not eliminated. The NMSP has just completed a multiyear program to catalog radioactive sources at U.S. national labs. It has also assisted in the recovery of unwanted radioactive sources, including plutonium, uranium, thorium, cesium, strontium, and cobalt, from these labs and is prepared to expand its efforts overseas. The OSR has recovered 8,000 unwanted radioactive sources from U.S. universities and hospitals and could recover at least 7,000 additional unwanted and unsecured sources. Neither of these programs is expensive to operate. They support the recommendations of the G8 and meet many of the needs detailed by Ferguson and Lubenau.

Second, the United States has no clear protocol for responding to the detonation of a radiological dispersal device (RDD). Given the public’s fear of anything related to radiation, it can’t be assumed that procedures that work well when a tanker of chlorine is in an accident will be effective in the case of a radiological incident. Clear guidelines for evacuation, sheltering in place, and post-event cleanup must be determined and disseminated before, rather than after, the detonation of an RDD. Preparatory training and materials to help public officials and the press communicate with the public during an incident are essential for ensuring that the public understands the risks and trusts local government and first responders. Radiation, dispersal patterns, and evacuation techniques are all well understood; what is needed is a clear plan of action. Furthermore, debates over the difficult issue of what is “clean” after an RDD event should be undertaken now, not after an incident. The protocol needs to reflect the fact that cleaning procedures to reduce cancer risks from radioactive particles bonded to concrete may not need to be as stringent as the Superfund and other applicable standards would suggest. Contaminated food and water supplies may present a far more urgent danger, and detailed plans should be in place for measuring the danger and establishing procedures before an incident takes place.

In short, Congress should act to fully fund DOE source control programs and to require that the Department of Homeland Security create response guidelines for an RDD.

HENRY KELLY

President

BENN TANNENBAUM

Senior Research Associate

Federation of American Scientists

Washington, D.C.

www.fas.org


Preventing forest fires

Policy related to dealing with fire on the public lands is certainly a hot topic (pun fully intended) in political circles. The development of a national fire policy and the pending passage of the Healthy Forest Restoration Act (H.R. 1904) are the two activities in this political season that are drawing the most attention. Unfortunately, debates surrounding these efforts in some circles have portrayed Republicans (in general) as trying to circumvent environmental laws by using growing concerns about “forest health” and fires “outside the range of historical variability” to speed up “treatments” (which involve cutting trees). Democrats (in general) have responded to environmentalists’ concerns about “shortcutting” public participation and appeals processes by opposing such actions.

The House of Representatives passed H.R. 1904 by a large margin. Things were blocked in the Senate until the dramatic wildfires in southern California in mid-October of 2003–hard on the heels of dramatic fire seasons in the Northwest in 2000, 2002, and 2003–broke the resistance, and political response to building public concern carried the day. In late October, the compromise legislation passed the Senate by a huge margin. As of this writing, the bill was in conference and considered certain to pass.

No matter what forms the general policy and the Healthy Forest Restoration Act take, it will be essential for natural resources management professionals to construct a more complete framework for action. At this point, “A Science-Based National Forest Fire Policy” by Jerry F. Franklin and James K. Agee (Issues, Fall 2003) can be considered fortuitous in its timing and prescient in guiding the discussion and providing a platform for the development of a science-based national wildfire policy.

If I were asked to lead an interagency group to take on this task, I would tell the administration that the Franklin/Agee paper would be a most excellent place to start. They correctly note that although H.R. 1904 provides impetus by addressing procedural bottlenecks to action, it does not answer the inevitable questions about “the appropriate decisions about where, how, and why.” Answering those questions would go far toward toning down opposition based on the suspicion that such activities are merely a charade to cover the accelerated wholesale cutting of timber.

To their credit, Franklin and Agee provide a generalized blueprint for doing just that. The land management agencies that will be tasked with carrying out the underlying intent of the legislation would be well advised to consult that blueprint in developing both the underlying foundation and the detailed approaches.

JACK WARD THOMAS

Boone and Crockett Professor of

Conservation

University of Montana

Missoula, Montana

Jack Ward Thomas is Chief Emeritus of the U.S. Forest Service.


Jerry F. Franklin and James K. Agee make an invaluable contribution to the debate over developing ecologically and economically balanced fire management policies. For eons, fire has played an essential role in maintaining the natural processes dictating the function, integrity, and resiliency of many wildland ecosystems. However, decades of fire suppression and past management practices have interrupted fire regimes and other natural processes, thereby compromising natural systems. Meanwhile, drought and development patterns have complicated the responses of land management agencies to “wildfire.” The result is a dire need for effective ecological restoration and community fire protection.

Franklin and Agee describe one possible trail that can be followed in crafting fire management policies tailored to current physical and ecological realities throughout the West. They make a strong case for community protection and ecosystem restoration, as well as for the role of appropriate active vegetation management in achieving those goals. The key to success lies in tailoring that management to the land. Initial, though not exclusive, priority should be placed on community protection. Once significant progress has been made in this all-important zone, efforts can shift more toward ecological restoration where needed across the landscape. Active management should logically be more intensive in the community zone and less intrusive and intensive in the backcountry. In some places, a cessation of management may be enough to free an area to resume natural processes. In other areas, prescribed burning, wildland fire use, strategic thinning, and other forms of mechanical treatment will be both necessary and appropriate. The key in virtually every case will be doing something about ground, surface, and ladder fuels.

The reintroduction of fire can play a vital role in this endeavor. Restoring fire to the ecosystem is of elemental importance both to the ecological health of Western landscapes and to the safety of Western communities that are currently at risk from fire. And yet there are very real social and political obstacles to reintroducing fire to wildland landscapes, even if doing so will have ecological and social benefits, including a reduced risk of future catastrophic fire. The combined effect of these factors is today’s challenge: how to better manage and live with fire so that people and communities are safe, while ecosystems are allowed to benefit from annual seasons of flame. Franklin and Agee make one thing clear: It is time to break the cycle of inaction and get to work in the right places with the right tools.

JAY THOMAS WATSON

Director, Wildland Fire Program

The Wilderness Society

San Francisco, California


Oil and water

Nancy Rabalais’ “Oil in the Sea” (Issues, Fall 2003) recommends that “the EPA should continue its phase-out efforts directed at two-stroke engines.” However, by not differentiating between old and new technology, the article exaggerates the impact of recreational boating and fails to recognize the effect of current regulations on the engine population.

Since 1997, the Environmental Protection Agency (EPA) has regulated outboard marine engines and personal watercraft. Since the mid-1990s, marine engine manufacturers have been transitioning their product lines toward low-emission technologies, including four-stroke and direct-injection two-stroke engines. Although the EPA regulations are designed to reduce emissions to the air, they will clearly also result in reduced fuel emissions to water as conventional crankcase-scavenged (or existing-technology) two-stroke engines are replaced by four-stroke and direct-injection two-stroke engines. In fact, the jump in engine technology has produced engines that exceed the early expectations used to develop the EPA rule. California adopted a rule in 2000 that accelerates the impact of the EPA national rule by five years. All outboard manufacturers had products in place meeting an 80 percent hydrocarbon emission reduction in 2001.

It should also be pointed out that the EPA does not typically promulgate rules that preclude certain technologies such as the two-stroke engine; in fact, direct-injection two-stroke engines are considerably cleaner than existing-technology two-stroke engines, and they meet or exceed EPA limits for 2006 and California limits for 2004. Current new-technology direct-injection two-stroke engines reduce hydrocarbon emissions by as much as 80 percent from the emissions produced by the conventional two-strokes characterized in “Oil in the Sea.” Marine engine manufacturers continue to invest in new technologies to reduce emissions. Marine engine users operate their products in water, and they all want clean water, whether for business or recreation.

SUE BUCHEGER

Manager, Engine Test Services

DAVID OUGHTON

Manager, Regulatory Compliance

Mercury Marine

Fond du Lac, Wisconsin


A humanities policy?

More money for the humanities? By all means! As Robert Frodeman, Carl Mitcham, and Roger Pielke, Jr. point out in “Humanities for Policy–and a Policy for the Humanities” (Issues, Fall 2003), there has been a shocking decline in the U.S. government’s commitment to the humanities during the past few decades, diminishing our ability to deal creatively with important social problems.

An obvious example is nuclear waste. Regardless of one’s views on the virtues or vices of nuclear power, current arrangements for waste storage are clearly inadequate. Yet local opposition to developing long-term storage facilities or even to transporting waste to a facility elsewhere is often fierce. Scientists and engineers characteristically dismiss this opposition as irrational, but regardless of its source, the opposition is as real as the waste that engenders it. If more humanists and social scientists had been involved in plans for waste disposal, some of these concerns might have been anticipated and addressed.

Moreover, the humanities don’t just offer perspective on the human dimensions of the waste disposal issue; they also offer insight into the science itself. After all, the science supporting the proposed repository at Yucca Mountain has been seriously criticized from within the scientific community, despite decades of work and billions of dollars spent. Could this money have been spent more effectively? Very likely. Historians who have studied large-scale science and engineering projects in diverse settings could have offered relevant insights and advice, both at the outset of the project and as difficulties arose along the way. Few scientists and engineers think of the humanities and social sciences as resources that could help them do their job better, but they should.

That said, a federal policy for the humanities is another story, for federal support is a two-edged sword. It is easy to congratulate ourselves on the way in which the U.S. government has managed the tremendous growth of federal science, but a closer look reveals a more complex and less gratifying picture. Although federal support was obviously good for physics (and more recently for molecular biology), other areas of science have not been so fortunate. A credible case can be made that significant areas have been starved of support, yet the question of how federal funding has affected U.S. science in both good and bad ways has scarcely even been posed.

Even more worrisome is the risk of deliberate politicization of research. Consider the recently released congressional report Politics and Science in the Bush Administration, compiled by the House Committee on Government Reform on behalf of Rep. Henry A. Waxman. This report “assesses the treatment of science and scientists by the Bush Administration [and] finds numerous instances where the Administration has manipulated the scientific process and distorted or suppressed scientific findings.” (Executive Summary, p. i). Waxman is a Democrat, but the report includes complaints from former Republican officials as well. If the sciences have been subject to intrusion and interference, consider the far greater vulnerability of disciplines whose topics are often explicitly political and whose methods and evidential standards are admittedly subjective and interpretive.

Policy for the humanities and humanities for policy? Perhaps, but there will be a price, and it might just be too high.

NAOMI ORESKES

Associate Professor of History

Director, Science Studies Program

University of California, San Diego


Confronting nuclear threats

Wolfgang K. H. Panofsky’s claim (“Nuclear Proliferation Risks, New and Old,” Issues, Summer 2003) that the United States “has failed to take constructive leadership” in countering the threat of nuclear terrorism is baffling, given this country’s record of counterproliferation initiatives. A couple of examples will suffice. The Nunn-Lugar Cooperative Threat Reduction and Nonproliferation Program, created by Congress in 1991, has been making significant progress in preventing nuclear weapons in Russia and the other former Soviet republics from being acquired by terrorists and “states of concern,” such as Iran. After the tragic events of September 11, 2001, the United States was among the first to recognize that, as the 2002 National Security Strategy puts it, “the gravest danger our Nation faces lies at the crossroads of radicalism and technology.” In May 2003, President Bush launched the Proliferation Security Initiative (PSI), which is expected to enable the United States and an 11-nation “coalition of the willing” to intercept transfers of weapons of mass destruction to states of concern. I believe Panofsky would agree that such preventive action against the proliferation of weapons, including nuclear materials, can be one of the first lines of defense against what he calls a nuclear catastrophe.

The real problem with the Bush administration’s approach to countering nuclear threats–both conventional and asymmetrical–has more to do with style than with actions, as Panofsky would have it. This administration has been steadfastly unwilling to bring its global counterproliferation initiatives within the structures of international law and security. The U.S. strategy since September 11, 2001, places emphasis on freedom of individual action (unilateral, if necessary) rather than on the constructive constraints of international institutions, such as the United Nations (UN) and NATO. Notwithstanding public statements to the contrary, U.S. actions, including the country’s counterproliferation posture, largely conform to this strategic vision. The PSI is no exception: The United States did not consider institutionalizing the initiative within the UN Security Council or NATO, which would have given it vital legitimacy and increased its long-term effectiveness. Instead, the Bush administration preferred to retain PSI’s image as a U.S.-made counterproliferation hit squad.

The administration’s scornful view of international norms (during its first months in office) has now largely been replaced by a paternalistic one, according to which leadership through international organizations is an act of charity, not an expression of self-interest. This worldview is problematic for two reasons. First, it is unsustainable in the long run, and the “forced multilateralism” that has found expression in the U.S. approach to the North Korean crisis attests to this. Second, it decreases the effectiveness and capabilities of our global initiatives. Organizing PSI under the UN Security Council or a NATO mandate would give the initiative unique clout by demonstrating the acceptance of this counterproliferation approach by a wide variety of countries and not just by an American-made coalition of the willing. It could also prompt more of our allies to take an active counterproliferation approach. Channeling the U.S. counterproliferation efforts through multilateral structures can significantly increase the effectiveness of these initiatives and, in turn, make this country and the rest of the world more secure from nuclear threats, both old and new.

EUGENE B. KOGAN

Research Intern

Center for Nonproliferation Studies

Monterey Institute of International Studies

Washington, D.C.

From the Hill – Winter 2004

NIH facing new pressures; proposed roadmap in doubt

The House and Senate committees charged with overseeing the National Institutes of Health (NIH) held a joint hearing in October that highlighted both the management challenges facing the research agency and the political challenges facing members of Congress as they seek to address those issues.

The Senate Health, Education, Labor and Pensions Committee and House Energy and Commerce Committee scheduled the hearing two days after NIH Director Elias Zerhouni unveiled a new roadmap for biomedical research designed to maximize opportunities and bridge gaps unlikely to be addressed under NIH’s current decentralized structure. The plan proposed a series of new initiatives to encourage cross-disciplinary research involving multiple institutes, some of which the agency has already begun to implement. At the hearing, Zerhouni made it clear that some elements of the plan call for strengthening the office of the director by providing greater authority over the agency’s budget and a more centralized planning mechanism.

The hearing was held amid increasing calls for Congress to step up its oversight of the agency, whose budget has doubled to about $28 billion during the past six years. In addition to providing Zerhouni an opportunity to present his proposals, the hearing served as a warning that his planned roadmap could encounter some rugged terrain on Capitol Hill. Committee members are contemplating the first reauthorization of NIH since 1993, and a plethora of obstacles could get in the way, from partisan politics to contentious ethical issues to the complex web of patient groups and research institutions with a stake in the outcome.

One such issue, which has recently garnered much attention, is an attempt by conservatives in the House to prevent NIH from funding certain studies that involve behavioral research relevant to drug abuse and HIV/AIDS transmission. Rep. Patrick Toomey (R-Penn.) proposed an amendment to the Labor-Health and Human Services appropriations bill in July that would have blocked funding for four such studies that had already been approved through NIH’s peer review process. The amendment failed by a vote of 212 to 210. Three of its backers raised the issue with Zerhouni at the hearing.

Further controversy ensued when an Energy and Commerce staff member provided NIH with a list of more than 200 grants that had been deemed questionable by the Traditional Values Coalition, a conservative advocacy group. NIH began contacting recipients of these grants, apparently to request information to help defend the research. But this prompted an outcry from scientific organizations, which expressed concern that such an action would undermine the peer review process and could deter researchers from pursuing projects similar to those targeted.

Rep. Mike Rogers (R-Mich.), who opposed the Toomey amendment, said at the hearing that his former career as an FBI agent had convinced him of the usefulness of research on sexual behavior, but he nevertheless cited a need for NIH to be more transparent.

Among the other contentious issues raised at the hearing were stem cell research, an outsourcing initiative that has rankled NIH staff, and allegations by Rep. Henry Waxman (D-Calif.) that the Bush administration has allowed ideology to interfere inappropriately with scientific panels at NIH.

Many members raised additional issues that are less explosive but nonetheless illustrate the complicated task facing lawmakers, who will need to balance a wide array of parochial interests and overarching policy concerns as they craft a bill. For example, Sen. Hillary Rodham Clinton (D-N.Y.) urged NIH to undertake comparative effectiveness studies of existing drugs; Rep. Stephanie Tubbs Jones (D-Ohio) focused on the need to address health disparities affecting minorities; and Sen. Edward M. Kennedy (D-Mass.) attacked the fiscal 2004 appropriations bill that awards NIH a much smaller increase than in past years.

Although Zerhouni was flanked at the hearing by two prominent scientists who support the roadmap–former NIH Director Harold Varmus, who now heads the Memorial Sloan-Kettering Cancer Center, and Harold Shapiro, who chaired an Institute of Medicine study of NIH’s organizational structure that was released in the summer of 2003–the plan may prove a nonstarter. The 27 institutes and centers that make up NIH face pressure from constituent groups to focus resources on their specific areas of concern. This makes it difficult for institute directors to support the centralized research efforts proposed by the roadmap, which is what motivated Zerhouni to ask Congress for greater authority. Members of Congress, however, face pressure from these same groups, many of which argue that the current decentralized structure is working well. It remains to be seen whether Congress will go along with Zerhouni and pave the way for his proposed reforms.

For more information on the NIH roadmap, see www.nihroadmap.nih.gov.

Coordination of federal counterterrorism R&D examined

On September 30, the National Security Subcommittee of the House Government Reform Committee heard testimony from federal agencies and industry leaders regarding a little-known part of the government called the Technical Support Working Group (TSWG). The hearing focused on the history and practices of this interagency group, which invests in prototype technologies for the intelligence community and first responders in order to prevent terrorist attacks and minimize damage to citizens and infrastructure. The event also offered an opportunity to hear how the new kid on the block, the Department of Homeland Security (DHS), fits into the long-established working group.

In his opening remarks, subcommittee chairman Christopher Shays (R-Conn.) noted that in the past, Congress has found a lack of coordination in federal counterterrorism R&D. Even testimony as recent as March 2000 described duplication of effort in the field of bioterrorism by the Department of Defense (DOD), the Department of Energy (DOE), and the Department of Justice. Shays lamented, “Now, to that already crowded field, add the Department of Homeland Security, which Congress charged to act as both a developer and clearinghouse for innovative technologies.”

TSWG was created in 1986 at the recommendation of a cabinet-level Task Force on Counterterrorism led by then-Vice President George H. W. Bush. In his testimony before the subcommittee, Michael Jakub, director of technical programs in the Office of the Coordinator for Counterterrorism at the State Department, noted that the Task Force found that U.S. counterterrorism activities were “uncoordinated and unfocused.” Thus, TSWG was established within an existing program chaired by the State Department called the Interdepartmental Group on Terrorism. That group had been created by National Security Decision Directive 30 in 1982 and given responsibility for developing overall U.S. policy on counterterrorism.

The goal of the 1986 Task Force was to create a mechanism for coordinating a national R&D program across relevant agencies that would reduce duplication of effort and make it easier to identify gaps in research that needed to be tackled by the federal government. Although R&D funding continues to be sponsored primarily by DOD, TSWG has grown over the years to include active participation by more than 80 federal programs in 11 cabinet-level departments and 8 independent agencies. The State Department continues to chair the TSWG Executive Committee and provides policy oversight, whereas DOD executes and administers the program.

The R&D portfolio of TSWG is relatively small, at $180 million in fiscal year 2003, spread among nine program elements: Chemical, Biological, Radiological and Nuclear Countermeasures; Explosives Detection; Improvised Device Defeat; Infrastructure Protection; Investigative Support and Forensics; Personnel Protection; Physical Security; Surveillance, Collection and Operations Support; and Tactical Operations Support. The majority of federal funds go to Chem-Bio Countermeasures (23 percent), Physical Security (16 percent), and Personnel Protection (13 percent).

DHS, recently invited to participate in TSWG, was given $75 million in its fiscal year 2004 appropriations to support research into prototypes of equipment that could be rapidly developed within the Homeland Security Advanced Research Projects Agency (HSARPA). This is a substantial increase from the $30 million initially requested by the administration, reflecting the serious interest that Congress has in this activity. According to David Bolka, the new HSARPA director, the agency expects to use the TSWG process “for the near term.” “As HSARPA matures and the Systems Engineering and Development branch of the S&T Directorate stands up, we will assume the majority of rapid prototyping responsibility and will coordinate it internally,” he added.

It is Bolka’s last statement that worries members of Congress. Shays compared the current TSWG structure to a chair with legs of different sizes. “I thought DHS would be the only agency evaluating proposals,” he said. “Why not just keep TSWG in the Department of Homeland Security?”

Edward McCallum, director of DOD’s Combating Terrorism Technology Support Office, explained that homeland security technology needs cut across many sectors and that the tools that derive from TSWG can benefit this diverse constituency. He cited as an example a robot for retrieving and/or detonating bombs that is used by the Pentagon, the FBI, and local bomb squads. He also emphasized that the current investment of $180 million does not preclude each department or agency from pursuing separate counterterrorism R&D programs.

Rep. John F. Tierney (D-Mass.) stated that first responders in his district are at a loss about where to go for the latest technology. And a clearly frustrated Shays asked, “How do you know proposals are vetted and weighed appropriately?”

McCallum attempted to reassure Shays that TSWG would continue to use its well-established three-step process for evaluating technologies, from concept phase to preparation of a full proposal. The idea is to quickly review and winnow down the large number of ideas that are submitted for developing counterterrorism technologies. According to McCallum, only 0.5 to 1 percent of ideas that are initially submitted by companies at the concept level move to the second stage of the review process. He further noted that “the success rate of proposals submitted [after the first two steps] is quite high: perhaps nine out of ten.”

The majority of industry representatives at the hearing viewed the TSWG model in a positive light. According to Bruce deGrazia, chairman of the Homeland Security Industries Association, the TSWG process “produces significant time and cost savings” for companies that submit ideas. However, he stated that only 15 percent of the association’s 400 industry members are even familiar with TSWG, and he recommended that the TSWG Web site be directly linked to the DHS Web site and that DHS organize a series of educational seminars.

For more information on the Technical Support Working Group, go to www.tswg.gov.

Bill promotes use of cord blood stem cells in treating disease

A bipartisan quintet of senators has proposed a bill that would promote the use of stem cells derived from cord blood, or blood collected from the umbilical cord and placenta after childbirth, in treating disease.

At an October 17 press conference, Sens. Orrin Hatch (R-Utah), Chris Dodd (D-Conn.), Sam Brownback (R-Kan.), Arlen Specter (R-Penn.), and Dianne Feinstein (D-Calif.), who hold different views on controversial embryonic stem cell research, hailed what they called “a new commitment to developing a national infrastructure of cord blood stem cell collection and research that could, in time, save the lives of thousands of gravely ill Americans.”

The Cord Blood Stem Cell Act of 2003 (S. 1717) would authorize the Health Resources and Services Administration, an arm of the Department of Health and Human Services responsible for improving access to health care, to establish and maintain a National Cord Blood Stem Cell Bank Network through contracts with existing or new cord blood banks that are certified at the federal and state level. The bill would set as a goal the collection of at least 150,000 units of human cord blood stem cells that are as genetically diverse as possible.

Cord blood contains hematopoietic stem cells, which can differentiate into the specialized cells of the blood and bone marrow. Since the early 1990s, a number of physicians have conducted cord blood transplants on children suffering from diseases such as leukemia and sickle cell anemia. In addition, a number of private cord blood banks have been established to meet demand from parents interested in preserving the cord blood of newborn babies as insurance in case of disease.

One problem is that little empirical evidence exists to show that stem cells extracted from a donor’s cord blood can be used to help the donor. The majority of the successful transplants have involved cord blood from siblings. The American Academy of Pediatrics (AAP) said in a statement that there is “no evidence of the safety or effectiveness of autologous (self) cord blood transplantation for the treatment of malignant [tumors].” Based on existing research on five diseases, AAP determined that conventional therapy or transplantation from a related donor is more effective than autologous cord blood transplantation. The statement, however, encourages the “philanthropic donation of cord blood for banking at no cost for allogeneic (related or unrelated) transplantation” and research.

S. 1717, which has been dubbed the Hatch/Brownback bill, would be a big step toward expanding public access to therapeutic applications developed by cord blood banks that conduct research. For example, the act would require that up to 10 percent of the cord blood collected be made available for peer-reviewed research. Furthermore, it would establish a registry system for identifying the blood units so that health care professionals can easily search for suitable donor matches. Encouraging a genetically diverse collection of samples would improve the odds of positive donor/patient matches. Finally, a board of directors composed of physicians, research scientists, patients, and industry representatives would be created to oversee the network. The bill would provide $15 million in fiscal 2004 to support its establishment and operation.

Rep. Christopher Smith (R-N.J.) introduced a companion bill (H.R. 2852) in the House in July.

President signs $3.7 billion nanotechnology research bill

President Bush on December 3 signed a bill authorizing the spending of almost $3.7 billion on nanotechnology R&D in five agencies over four years. The 21st Century Nanotechnology Research and Development Act passed the House and Senate during the week of November 17.

The new spending will be shared by the National Science Foundation (NSF), Department of Energy (DOE), National Aeronautics and Space Administration, National Institute of Standards and Technology, and the Environmental Protection Agency. NSF and DOE will be the primary R&D sponsors, with spending of $1.73 billion and $1.46 billion, respectively. Funding is not scheduled to begin until fiscal year 2005. Management of the program will be coordinated through the National Science and Technology Council, with technical and administrative support from staff within a newly created National Nanotechnology Coordination Office.

A point of contention between the House and Senate was the composition of an external advisory committee to provide additional oversight and assessment of the progress of the research programs. The bill calls for the president to establish or designate a National Nanotechnology Advisory Panel, but follows the House intent of providing the administration with greater flexibility in its composition. The provision, however, strongly emphasizes that panel members should come primarily from academia and that the president should consider recommendations from the scientific community and state and local governments.

The legislation also resolves differences between House and Senate committees over the management of research into the ethical and societal implications of nanotechnology. The bill allows the creation of an American Nanotechnology Preparedness Center responsible for the “conduct and dissemination of studies on the societal, ethical, environmental, educational, legal, and workforce implications of nanotechnology.” Sen. Ron Wyden (D-Ore.) had pushed for the separate center, whereas House Science Committee Chairman Sherwood Boehlert (R-N.Y.) would have kept such responsibilities as an element of the R&D programs within each of the participating agencies.

The final bill, however, fails to allocate any funds for the new center, which will be established through a merit-based competitive process. Wyden’s original bill authorized $5 million a year, a figure that members of Congress believed was arbitrary.

In a press release issued shortly after House passage, Boehlert said, “The United States is the leader in nanotechnology and must remain so as this new field starts remaking the marketplace. The nanotechnology program will be a model of government, university, industry cooperation, and of coordination, interdisciplinary research and public involvement.”

More information is available on the Web site of the National Nanotechnology Initiative (www.nano.gov).

House renews effort to protect commercial databases

After a number of false starts during the past few years, the House is making another attempt to enact legislation that would clarify the legal rights of commercial database owners in the age of the Internet. But even the new bill, which is an attempt at a compromise, is proving to be controversial.

On October 8, Rep. Howard Coble (R-N.C.) introduced the Database and Collections of Information Misappropriation Act (H.R. 3261), which would make illegal the unauthorized use of a database or, as the proposed text defines it, “a collection of a large number of discrete items of information.” The bill has five cosponsors, including House Judiciary Committee Chairman F. James Sensenbrenner Jr. (R-Wisc.) and House Energy and Commerce Committee Chairman W. J. “Billy” Tauzin (R-La.).

Proponents say the bill is needed to protect the significant investments of time and money involved in creating databases, as well as to spur economic growth by promoting the creation of new databases. Their concerns derive from the ease with which electronic materials can be pirated from a commercial database and made commercially available.

Opponents, however, argue that the electronic format of these works is already protected under other laws, such as the Computer Fraud and Abuse Act, and that enactment of the bill might endanger public service activities such as price comparison Web sites or lists of political candidates’ voting records.

Past versions of database legislation sought to clarify the amount of information from a particular database that could be legally used outside of the database and what remedies would be available if that amount was surpassed. The new bill attempts a “clarification” of technological terms but, in the eyes of many opponents, is potentially damaging. For example, the bill describes the amount of data that can be legally used as “quantitatively substantial,” a term that is not defined within the act and is thus subject to a wide range of interpretations.

Many in the research community rely on databases to share scientific information and have long supported the “fair use” principle that allows factual information to be publicly available. For example, research projects on climate change and the human genome have used publicly accessible databases to facilitate large-scale collaborations by far-flung scientists. Hence, the scientific and library communities have expressed concern about any new database protection legislation.

H.R. 3261 includes a provision that would protect nonprofit educational, scientific, or research institutions from litigation should they make available certain portions of a database. However, these exemptions require that a court first hear a claim against a party to determine if the sharing is “reasonable under the circumstances, taking into consideration the customary practices associated with such uses of such database[s] by nonprofit educational, scientific or research institutions and other factors that the court determines relevant.”

Opponents express concern that such ambiguous language could be used to create delays as courts determine whether research projects fall under the exemption. Similarly, they argue that the legislation fails to clarify the fate of government databases that are owned or cocreated with other entities such as universities, and that these works may fall under the government exemption but may ultimately be open to recapture under other sections of the act.

Database protection legislation was first introduced in the 104th Congress, and proposals have been floated in each Congress since. In the 107th, no bills were formally introduced, but much action took place behind the scenes, leading to the current effort. At an October 16 markup, the House Judiciary Committee’s Subcommittee on Courts, the Internet and Intellectual Property approved the bill by a 10 to 4 vote.

Senate passes bill barring genetic discrimination

By a vote of 95 to 0, the Senate on October 14 voted to prohibit employment and health insurance discrimination based on genetic information. The Genetic Information Nondiscrimination Act (S. 1053) would prevent health insurance providers and employers from using an individual’s genetic predisposition to a disease as a basis for denying access to health coverage or a job.

Although the completion of the human genome sequence has raised hopes of a medical revolution, the bill’s supporters said that to take full advantage of this achievement, the public must be assured that genetic information will be used to improve health and not to discriminate unfairly. They argued that many individuals considering genetic tests have decided against testing because of discrimination fears.

“Genetic screening is a powerful tool and can impart highly sensitive and very personal information,” said Senate Majority Leader Bill Frist (R-Tenn.), a key supporter of the legislation. “The fear of genetic discrimination has the potential to prevent individuals from participating in research studies, from taking advantage of new genetic technologies, or even from discovering that they are not at high risk for genetically related illnesses.”

Advocates for the health insurance industry, which opposed the bill, said that there is little evidence that genetic discrimination is actually occurring, and that the Health Insurance Portability and Accountability Act already provides sufficient protection.

Although President Bush has announced his support for the bill, it is unclear whether the House will go along. H.R. 1910, a bill authored by Rep. Louise Slaughter (D-N.Y.) and backed by House Administration Committee Chairman Bob Ney (R-Ohio), is similar to the Senate bill but also provides a right to sue for misuse of genetic information. It has collected 228 cosponsors from both sides of the aisle.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Clean Air and the Politics of Coal

Air quality policy–technically complex and always contentious–has become the focus of bitter controversy. The public debate is about an Environmental Protection Agency (EPA) program called New Source Review (NSR), which regulates emissions from industrial facilities. There’s a surprising consensus about how to fix NSR: emissions caps and a trading system. But underlying the debate and preventing compromise is deep disagreement about the future of the powerful coal industry. The administration would protect coal, whereas others give precedence to public health.

The EPA has issued a series of highly complicated new rules that will dramatically weaken NSR. Of particular concern are coal-fired electric plants that have not complied with air quality standards that are 25 years old. Environmentalists cite the new rules as a vivid example of how the Bush administration is dismantling environmental protection.

The administration responds that the old rules are too costly and prevent modernization of facilities in ways that would efficiently reduce emissions. It proposes a “Clear Skies” bill that would establish a cap-and-trade system for mercury and nitrogen oxides but would relax somewhat the schedule for reductions of sulfur dioxide, which already operates under a highly successful cap-and-trade system. Clear Skies would not include emissions of carbon dioxide, the main greenhouse gas implicated in human-induced climate change.

Congress has been considering bills to address air quality, climate change, and changes to NSR. But for now it cannot agree on what to do.

It is time to reform NSR, which exemplifies command-and-control regulations that set forth in excruciating detail how firms must reduce emissions. A cap-and-trade system would replace bureaucratic guidance by setting limits on emissions from major sources and allowing facilities flexibility to install pollution controls, switch to cleaner fuels, redesign processes and products to avoid pollution, or pay other plants to do these things.

But the Bush administration’s approach would allow old coal-fired plants to keep polluting while the industry develops a technological fix to reduce emissions of greenhouse gases. Until that fix is achieved–possibly decades from now–aging plants using outdated technology would be able to continue to burn cheap, dirty coal.

If we are serious about reducing current levels of air pollution–and we should be, given the clear evidence about the effects of coal-fired electricity on human health and the environment–we can’t afford to wait. We favor requiring old plants to clean up now and then installing a flexible, multitiered cap-and-trade system. This would create market incentives for coal and utility industries to develop clean technology rather than continue hiding behind government rules.

Costly grandfathers

The Clean Air Act of 1970, passed soon after the first Earth Day, set the pattern for U.S. environmental law. The statutes adopted during the next several years required that major industrial plants install pollution control equipment to reduce emissions at the tail end of production processes–from smokestacks, tailpipes, and drainage pipes. These requirements ushered in terms such as “best available control technology” and “lowest achievable emissions rate.” Although varying in their stringency and workability, these regulations all rely on detailed industry-by-industry specifications that firms must adopt.

This approach exerts some pressure for innovation. When better control technologies come on the market, presumably they are certified and raise standards. But the system gives firms no incentive to custom-design more effective technologies for special situations, to reexamine their entire production process, or to improve beyond required levels.

When Congress wrote the Clean Air Act, it exempted old plants, believing that the most economical time to add pollution controls was when plants were expanding production or modifying processes. This makes sense for cars, which typically are replaced every few years. But it soon became clear that many grandfathered industrial facilities, especially coal-fired power plants, were contributing a large share of the nation’s air pollution and were not going to clean themselves up any time soon.

To force cleanup, Congress in 1977 required facilities to install state-of-the-art technology as part of modifications that include any physical change or any change in their method of operation that increases the amount of any air pollutant. Taken literally, this was a very high standard–too high to be practical. Presumably, Congress did not mean to trigger the NSR process when a plant installed a new light bulb. So a few years later, when Democrats controlled both the presidency and Congress, the EPA excluded “routine maintenance, repair, and replacement” from the definition of modification.

This opened a battle about the difference between routine maintenance and other modifications. Consultants to the electric utility industry began promoting multimillion-dollar “life extension projects” that would allow companies to “repair” coal-fired power plants without triggering the NSR process. The EPA took one company to court, asserting that changes in its plant were not routine maintenance or repair. The judge ruled in 1988 in favor of the EPA, saying that, “the statutory scheme intends to grandfather existing industries; but the provisions concerning modifications indicate that this is not to constitute a perpetual immunity from all standards.”

Even after this decision, litigation and lobbying continued as the EPA labored to rewrite voluminous guidance to its staff and to states that manage NSR under EPA oversight. Industry claimed that uncertainty about definitions and procedures kept many firms from investing in repairs and expansions. For example, the National Coal Council, an industry-funded adviser to the Department of Energy (DOE), claimed in 2000 that the utility industry was delaying projects that could expand generating capacity by 10 percent, while at the same time reducing emissions. The EPA encouraged companies to seek advice about whether proposed modifications were routine maintenance. But there is no requirement that firms report routine maintenance projects to the EPA or to state agencies. Many firms went ahead without asking for advice or permits, and “don’t ask, don’t tell” became the operative approach.

In 1993, the Clinton administration reorganized the EPA, establishing a separate Office of Enforcement and Compliance Assistance. Seeking to target its resources to get the biggest environmental gains, this office analyzed emissions and the number of permits and enforcement actions for different industries. This effort showed that many coal-fired power plants, refineries, steel mini-mills, chemical manufacturers, and pulp and paper mills had significantly expanded production without going through the NSR process. The EPA began investigating electric utilities in 1996, and three years later the Justice Department filed the first of 51 enforcement cases related to NSR, including 14 against electric utilities.

Trade associations for electric utilities and many individual companies vigorously protested the litigation. They claimed that the EPA was using a new interpretation of routine maintenance and had not given “fair notice.” According to their argument, EPA and state inspectors had been visiting their plants for years and must have seen life extension projects, but had never insisted on the need for NSR. Nevertheless, seven electric utilities and 14 refineries and other factories have settled out of court. They agreed to pay $79 million in penalties and to spend $4.6 billion cleaning up their facilities and $93 million on other environmental improvements.

In August 2003 came the first federal court decisions about the enforcement cases. FirstEnergy, the defendant, claimed that the changes made to its power plants were “the kinds of routine maintenance that every single coal-fired power plant does.” The court, however, found that the company’s interpretation of routine maintenance would be “in direct conflict with the superceding and controlling language of the Clean Air Act.” Later that month, a different federal court advised lawyers that “routine maintenance” should be interpreted industrywide, so that one plant could not be faulted if it followed the same practices as other firms. Still, the Justice Department and the EPA have not lost a single case.


Under the Bush administration, the EPA initially promised to continue pushing these types of enforcement cases. But in August 2003, the agency revised its definition of routine maintenance. The new definition exempts any change that does not exceed 20 percent of the replacement value of the entire process unit, any replacement of components of a process unit with identical or functionally equivalent components, any replacement that does not change the basic design parameters of the process, and any replacement that does not cause the unit to exceed emissions limits. This new definition will enable plants to conduct extensive modernization without installing best available control technologies. The “replacement value” of a 30-year-old power plant may be more than its original cost. By rebuilding a unit in stages over several years, each stage costing less than 20 percent of the replacement value, a plant might be completely rebuilt without incorporating state-of-the-art pollution controls.
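
To make the arithmetic of that staged-rebuild loophole concrete, here is a minimal sketch; the replacement value and stage costs are invented for illustration, and only the 20 percent threshold comes from the rule described above.

    # Hypothetical illustration of the 20 percent "routine maintenance" exemption.
    # All dollar figures are invented; the threshold is the one described in the text.
    replacement_value = 500.0                 # assumed replacement value of a process unit, $ millions
    threshold = 0.20 * replacement_value      # a single project above this could trigger NSR review

    # A staged rebuild: five projects spread over several years, each just under the threshold.
    stage_costs = [95.0, 90.0, 98.0, 92.0, 99.0]   # $ millions, hypothetical

    for stage, cost in enumerate(stage_costs, start=1):
        print(f"Stage {stage}: ${cost:.0f}M spent, exceeds 20% threshold: {cost > threshold}")

    total = sum(stage_costs)
    print(f"Total over {len(stage_costs)} stages: ${total:.0f}M "
          f"({total / replacement_value:.0%} of replacement value), "
          f"with no single stage triggering New Source Review.")

Run as written, the sketch shows roughly 95 percent of the unit’s replacement value being spent, stage by stage, without any single project crossing the exemption’s 20 percent line.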

Four months before the new rules, the National Academy of Public Administration (NAPA), a nonprofit organization chartered by Congress to advise agencies facing difficult problems of management and governance, issued a report on the NSR process. The fact that many old plants had not upgraded pollution control equipment was seen as a clear failure. The report concluded that the legislative history of the Clean Air Act, along with subsequent regulations and judicial decisions, supports a strict interpretation of routine maintenance. It called for Congress to end grandfathering and for the EPA and the Justice Department to aggressively pursue enforcement cases. Under its recommendations, within the next 10 years all major sources of pollution that had not obtained an NSR permit since 1977 would have to lower emissions to levels achievable with best available control technology.

The report also recommended that the EPA require firms to monitor and disclose their emissions publicly. This would end the “don’t ask, don’t tell” approach.

High stakes

The stakes are very high in the struggle to regulate grandfathered plants, especially coal-fired electric utilities. End-of-the-pipe controls for air emissions are often costly. An industry-financed report to the Energy Information Administration said in 2000 that complying with NSR at old coal-fired electric power plants would cost an estimated $65 billion. For individual plants, installing best available control technology may cost hundreds of millions of dollars.

The public also has a big stake. If power plants reduced emissions of sulfur dioxide and nitrogen oxides by 70 percent from their 1997 levels–a reasonable assumption if all old power plants had to meet the standards for new facilities–recent EPA estimates indicate that the cleanup could prevent 14,000 premature deaths each year from asthma, other respiratory diseases, and cancer.

Do these figures suggest that forcing old plants to clean up would be an efficient use of resources? Caution is necessary; both sides have commissioned numerous studies that make different assumptions and yield different estimates. One simple evaluation strongly suggests that cleanup would be justified on cost/benefit grounds. The federal government generally uses estimates of $3 million to $6 million for the value of a life saved. At this rate, 14,000 deaths prevented in one year would justify more than half of the total cost of installing best available control technologies, and deaths prevented in future years would come free. There would be other benefits as well, such as fewer short-term health problems and less acid rain and haze. But are there better, cheaper ways to save lives and clean the air?
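
The back-of-the-envelope arithmetic behind that claim can be laid out explicitly; the $65 billion cost and the 14,000 deaths-per-year figure are the estimates quoted above, and the rest is simple multiplication.

    # Rough cost/benefit check using the figures cited in the text.
    compliance_cost = 65e9                    # estimated cost of NSR compliance at old plants, dollars
    deaths_prevented_per_year = 14_000
    value_low, value_high = 3e6, 6e6          # federal range for the value of a life saved

    benefit_low = deaths_prevented_per_year * value_low    # $42 billion per year
    benefit_high = deaths_prevented_per_year * value_high  # $84 billion per year

    print(f"One year of avoided deaths is worth ${benefit_low/1e9:.0f}B to ${benefit_high/1e9:.0f}B,")
    print(f"versus a compliance cost of ${compliance_cost/1e9:.0f}B, or "
          f"{benefit_low/compliance_cost:.0%} to {benefit_high/compliance_cost:.0%} of that cost.")

Even at the low end of the valuation range, a single year of avoided deaths covers roughly two-thirds of the estimated cleanup bill, which is why deaths prevented in later years can be said to “come free.”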

In 1990, Congress amended several sections of the Clean Air Act. It decided not to tangle with NSR but did establish a precedent for later reforms. The amendments set national limits on emissions of sulfur dioxide, allocated shares of this cap to existing plants, and allowed plants either to use their full share or to buy shares from other companies. Under the cap-and-trade system, cleanup costs proved to be far lower than expected. Many plants switched from coal to natural gas; others switched from the high-sulfur coal produced in the Midwest to low-sulfur coal from Wyoming and Montana. Competition spurred the manufacturers of pollution-control equipment to innovate and cut prices. Utilities found more efficient ways of operating power plants. And some firms bought emissions rights, which sold at lower prices than expected.
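
A toy example shows why trading lowers total costs; the two plants, their emissions, and their abatement costs below are entirely hypothetical, and only the cap-and-trade logic reflects the program described above.

    # Toy illustration of allowance trading. All figures are invented.
    cap = 100                                            # total allowed emissions, tons
    emissions = {"Plant A": 80, "Plant B": 80}           # uncontrolled emissions, tons
    abatement_cost = {"Plant A": 200, "Plant B": 800}    # dollars per ton removed

    required_cut = sum(emissions.values()) - cap         # 60 tons must be removed in total

    # Uniform, command-and-control-style cuts: each plant removes 30 tons.
    uniform_cost = sum(required_cut // 2 * abatement_cost[p] for p in emissions)

    # Cap and trade: the low-cost plant makes the whole cut and sells its spare allowances.
    trading_cost = required_cut * min(abatement_cost.values())

    print(f"Uniform cuts cost ${uniform_cost:,}; meeting the same cap through trading costs ${trading_cost:,}.")

The same 60 tons come out of the air either way; trading simply routes the cleanup to the plant that can do it most cheaply, which is the incentive the 1990 amendments created.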

The 1990 amendments left open the possibility that EPA regulations might allow individual firms to conduct a scaled-down version of cap-and-trade, a process that became known as “plantwide applicability limits” (PALs). Many large industrial facilities have dozens of operating units that discharge pollutants, and the plants must obtain a permit for each unit. PALs allow a plant to obtain one permit for the whole facility. This gives companies an incentive to reduce emissions at the cheapest point, to develop new technologies, or to change processes in ways that reduce pollution cheaply.

Industries that change product design frequently–for example, semiconductors and pharmaceuticals–have been particularly interested in PALs. For them, the complexities and uncertainties of NSR are a major problem, because it can take a year or more to apply for a permit and get a decision. PALs would allow firms to change their production without getting a new permit as long as total emissions from the plant do not exceed the cap. PALs are less critical for electric power plants, because most of their emissions come from a few stacks.

The NAPA report endorsed PALs as well as cap-and-trade programs after plants clean up. It proposed a three-tier system that would include statutory changes and provide more flexibility for plants that can install monitoring equipment to ensure that trading is based on solid information.

Tier 1, Cap and Trade, would feature a national or regional multipollutant system for all fossil fuel-fired power plants, industrial boilers, and similar facilities that can monitor emissions continuously or model their emissions reliably.

Tier 2, Cap and Net, would kick in where continuous monitoring is not economically or technologically feasible but where emissions can nevertheless be reliably monitored or modeled. Emission limits for a facility would be established initially based on the reductions that could be achieved by state-of-the-art equipment. These caps would cover all sources of emissions from the facility and thus function as PALs. The facility could then modify any part of its operation for the life of its permit (presumably 10 years) without an NSR permit, provided the emissions cap is not exceeded.

Tier 3, Unit Cap, would set limits for sources not included in the first two tiers.

During the Clinton administration, the EPA experimented with PALs but was not able to negotiate an agreement with industry and environmentalists concerning their widespread use. The regulations adopted by Bush’s EPA authorize PALs but relax NSR requirements.

Some analysts have suggested that caps should be based on meeting current state-of-the-art emissions standards and then should decline over time to capture at least some of the emissions reductions that NSR would normally require. The new regulations are far more liberal. They allow facilities to set emissions baselines at the level of emissions during any two-year period within the previous 10 years; NSR baselines had previously been based on emissions during the most recent two years of operation. The new rules thus raise the possibility that a PAL cap might be significantly higher than recent emissions. Further, facilities are not required to reduce emissions over the life of the PAL, which allows them to continue operating at high emissions levels for several years.
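
A simple sketch shows how much the baseline choice can matter; the emissions history below is invented, and only the two baseline rules come from the regulations described above.

    # Hypothetical emissions history for one facility, tons per year, most recent year last.
    history = [950, 940, 900, 870, 820, 780, 730, 690, 650, 620]

    # Previous practice: baseline tied to the most recent two years of operation.
    old_baseline = sum(history[-2:]) / 2      # (650 + 620) / 2 = 635

    # New rule: the facility may pick any consecutive two-year period in the past 10 years.
    new_baseline = max(sum(history[i:i + 2]) / 2 for i in range(len(history) - 1))  # 945

    print(f"Baseline under the old practice: {old_baseline:.0f} tons/year")
    print(f"Baseline under the new rule:     {new_baseline:.0f} tons/year")
    print(f"The cap could sit {new_baseline - history[-1]:.0f} tons/year above current emissions.")

For a plant whose emissions have been falling, the new rule lets it anchor its cap to its dirtiest years, leaving ample headroom to increase emissions without any further review.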

The Bush administration has also proposed an amendment to the Clean Air Act to allow a broader cap-and-trade system. Congress is now considering this Clear Skies proposal, as well as other bills that would require quicker, deeper cuts in emissions. As if this were not sufficiently difficult, the congressional debate is now further complicated by the wild-card question of how much consideration to give to climate change.

The carbon question

Within many industries, there are firms that have struggled to understand and comply with the rigid and complex NSR requirements. The new regulations adopted in 2003 will provide substantial relief. The electric power industry has the biggest stake in further changes. Electric utilities produce 66 percent of the nation’s sulfur dioxide emissions, 37 percent of the carbon dioxide, and 26 percent of the nitrogen oxides. Most of these emissions come from coal-fired plants, of which 80 percent are grandfathered. Coal-fired plants produce 55 percent of the nation’s electricity.


At this point, most old coal-fired plants are inexpensive to run. Their owners earned enough long ago to cover the costs of construction, and coal is cheap. Thus, they can sell electricity at relatively low prices and still earn healthy profits, while at the same time helping local manufacturing plants keep costs down. But the future of old coal-fired plants is uncertain.

If the EPA were to enforce NSR requirements as it began to do under the Clinton administration, then some old coal-fired plants might install costly new pollution control devices, while others would switch to natural gas or shut down. However, additional regulations controlling carbon dioxide emissions to deal with climate change could force coal-fired utilities to invest in very different technologies than those required by NSR. Pollution controls for the gases regulated today could easily become stranded investments if carbon dioxide is regulated. If lenders believe that carbon eventually will be regulated, coal-fired utilities might have to pay a premium now to borrow funds for cleaning up to meet NSR requirements. Thus, if one assumes that the United States eventually will regulate emissions of carbon dioxide, it makes little sense to force coal-fired plants to clean up under current law. Instead, Congress should create a cap-and-trade system now for carbon dioxide as well as for the pollutants covered by current law.

During the presidential campaign, George W. Bush endorsed cap-and-trade legislation for carbon dioxide as well as for nitrogen oxides, mercury, and sulfur dioxide. Soon after the election, however, electric utilities and the coal industry made a concerted effort to change his position. In February 2001, the president announced his opposition to the regulation of greenhouse gases and to the Kyoto Protocol on climate change, an international agreement to reduce such emissions. The administration’s Clear Skies initiative would set caps on nitrogen oxides and mercury to substitute for long-planned command-and-control regulation, and it would relax somewhat the schedule for reductions in sulfur dioxide. Environmentalists claim this would roll back environmental protections even further than the new NSR regulations. The administration argues that the changes, on balance, would keep limits tight and that allowing trades of nitrogen oxide and mercury would sharply reduce the costs of cleanup.

In taking this approach, the Bush administration is putting the United States on a very different path from that taken by Europe in combating climate change. During the 1990s, several countries in Europe reduced their use of coal sharply. France built several nuclear power stations to generate electricity, while Britain, Germany, and the Netherlands switched to newly discovered natural gas from the North Sea or from Russia. The United States has far larger deposits of coal, a quarter of the world’s reserves. In the 1990s, coal production remained high even as utilities built many new natural gas plants that compete with coal. The result is excess capacity for generating electricity in parts of the country, especially the South.

The administration’s approach to NSR, as well as to other air quality issues, seems to rely on developing a technological fix for carbon dioxide that will enable coal to remain a mainstay of energy production for decades. The proposed EPA regulations will let coal-fired plants postpone the difficult choice about which pollutants to reduce. In the meantime, these plants can continue operating indefinitely burning cheap, dirty coal and can preserve their capital for later investment in so-called “clean coal” technologies.

Indeed, the administration has invested heavily in clean-coal technology, with DOE citing the president’s “$2-billion commitment to coal” and calling coal-fired electricity generation plants the “cornerstone of America’s central power system.” DOE issued its initial solicitation for matching grants for clean-coal projects in March 2002, supported by more than $300 million, stating that the ultimate goal of the research efforts is developing “an emission-free coal plant of the future.” The research agenda contemplates the deployment of several new technologies to meet this goal, including coal gasification, advanced combustion, fuel cells, and sequestration of carbon dioxide.

If the past is prologue, however, the outlook for such efforts may not be rosy. The federal government has provided substantial subsidies–research grants, tax breaks, and loans–for clean-coal projects for two decades without achieving a major breakthrough. Nonetheless, elements of the coal industry and some environmentalists are enthusiastic about the promise of integrated gasification combined-cycle (IGCC) plants, which use a technology different from that of other clean-coal projects. The National Coal Council reported in 2000 that new IGCC plants can compete with natural gas plants when gas costs $3.75 to $4 per thousand cubic feet, and prices in 2003 have generally exceeded $5 per thousand cubic feet.

IGCC plants heat coal, water or steam, and oxygen to high temperatures under high pressure. They produce a synthesis gas of hydrogen and carbon monoxide that can be used in today’s natural gas-fired electric plants, with minor modifications. They also can produce high-grade chemicals and diesel fuel, and they generate as waste a slag that is less voluminous and less reactive than the vast quantities of waste from conventional end-of-the-pipe technologies.

Carbon sequestration is the other part of the government’s coal-based strategy. The United States, the European Union, Russia, China, and several other countries recently agreed to create the Carbon Sequestration Forum, whose aim is to stimulate research into sequestration technologies to clean up fossil fuels by capturing carbon dioxide at the source and storing it for thousands of years deep underground.

There is plenty of room for skepticism about clean coal. Will new technologies prove adequate to reduce air and other forms of pollution from coal? Can IGCC plants match the record of coal-fired units for delivering electricity reliably and economically? Can carbon dioxide be economically recovered from combustion processes and stored for centuries in seismically stable underground reserves?

If clean coal ultimately proves economically feasible and environmentally sound, then this resource may be an essential part of a global strategy to address climate change. India and China will soon account for almost 40 percent of the world’s population. Both countries have substantial deposits of coal, but little oil and gas. Their economies are growing rapidly and their use of coal is rising sharply. In the long run, the world probably must shift from fossil fuels to alternative sources of energy, but natural gas and perhaps clean coal might be transitional fuels for two or more decades.

Politics at work

Regional interests add to this complex picture. Most of the coal in the United States is in the Midwest and northern Appalachians, the northern Great Plains, and Texas. The utilities that burn coal are concentrated in the Midwest and parts of the South. Indeed, cheap coal-fired electricity is a bulwark of the economy in these regions, and for decades it has been a key factor in attracting and retaining manufacturing plants. As politicians from the Northeast are fond of pointing out, pollution from these coal-fired plants drifts east and north into Pennsylvania, New York, New Jersey, and New England. However, the Northeast cannot blame all of its air quality problems on other states, because cars and trucks contribute substantial pollution, and several of the oldest, dirtiest coal-fired plants are located in this region.

Regional disparities in the costs and benefits of cheap coal-fired electricity lead to conflict. In 1970, Sen. Edmund Muskie, a Democrat from Maine, led the fight to write the Clean Air Act, against resistance from his committee chair, Sen. Jennings Randolph of West Virginia. Today, members of Congress from the Northeast are among the strongest critics of the administration’s NSR reforms, and many members from Ohio, West Virginia, Illinois, and Montana are supporters. The battle lines are not really partisan; they are based on the fuels burned to generate electricity in different states and thus are largely regional. New York’s Republican governor and Maine’s two Republican senators are vocal critics of NSR reforms, and Senate Democrats from West Virginia, Illinois, and Montana are quiet. Attorneys general from 15 states, mostly in the Northeast, have filed lawsuits saying that the administration’s new NSR rules violate the Clean Air Act.

These regional differences have immense political importance. Midwest states, including Ohio, Illinois, Michigan, and Pennsylvania, have historically been swing states in presidential elections. In 2000, coal-mining West Virginia went Republican for the first time since 1984, and Tennessee, where the Tennessee Valley Authority has several coal-fired plants, voted Republican as well.

There’s a surprising consensus about how to fix NSR: emissions caps and a trading system.

Regional differences also prompt varying views among electric utilities. Utilities that produce electricity in nuclear plants or with hydropower emit very little of the pollutants covered by NSR. These utilities, along with those that have invested in new gas-fired turbines, compete with coal-fired utilities now that many states have deregulated the generation and transmission of electricity. Utilities that generate electricity with nuclear, hydro, or gas might capture new customers if coal-fired utilities are forced to clean up and raise rates.

Utilities are also divided by their views on climate change. Some utilities have bitten the bullet and endorsed regulation of greenhouse gases now to avoid a second cleanup. In the late 1990s, American Electric Power, which is based in Ohio and burns more coal than any other utility, endorsed federal legislation to regulate carbon dioxide. In contrast, the Southern Company, which burns coal in Georgia, Alabama, Mississippi, and Florida, has lobbied aggressively against regulating carbon dioxide.

Guiding principles

With so much at stake, maneuvering has been constant, will continue for some time, and may figure in the presidential campaign. But sooner or later federal lawmakers must decide whether to bet on clean coal or insist that coal-fired plants clean up now. Four principles may help guide decisions.

The first principle is to move away from command-and-control regulation as epitomized by NSR. This shift must be made carefully, as NSR is part of a tightly woven fabric of environmental statutes. The NSR process is working far more effectively for new plants than for old. Indeed, these rules play a critical role in managing local air quality. States must write and implement plans to clean the air over metropolitan regions that are home to more than 120 million people, where air quality has improved but still does not meet national standards. The public resists requirements for cleaner cars, fewer roads, or tighter limits on small businesses and backyard grills, which are generally more costly than policies aimed at cleaning up coal-fired power plants. Many states have found it easier to squeeze small increments of cleanup from big facilities, so the companies that own these facilities have a direct interest in helping states educate the public about the need for everyone to share the burden. Furthermore, the Clean Air Act provides that a new plant must not only install the best available control technology but also finance cleanup by other existing sources. If NSR were wiped away, state regulators would lose this tool. For such reasons, NSR should be retained for new plants, at least for the time being.

Second, strict enforcement of law is essential. This is the foundation for insisting on cleaning up grandfathered plants. There are so many sources of pollution and so many opportunities for preventing and reducing pollution that no one–the EPA, the states, or environmental activists–can keep track of them all. Environmental regulation depends on voluntary compliance and public trust. Strong targeted enforcement that stops egregious violations, deters others from violating the law, and builds public trust in the regulatory program is essential. Congress clearly expected that grandfathered plants would eventually clean up or shut down, but many have skirted the law. Twenty-five years is enough. They should clean up. When they do, the benefits to public health and to public support for and compliance with environmental regulations will be substantial.

The third principle is to build a greater capacity to adapt to changing conditions. Cap-and-trade programs and PALs give incentives for innovation throughout the production process, not just end-of-the-pipe pollution control. The next step is to set long-term goals for climate change policy. The United States has not yet had a robust national debate about how to fuel its economy in ways that will reduce the risks of climate change. Public debate about climate has focused primarily on the Kyoto Protocol, on the growing rift with the European Union, on whether developing nations should reduce their greenhouse emissions, and on high-profile issues such as oil exploration of the Arctic National Wildlife Refuge. It is time to set 50-year goals for global concentrations and national emissions of greenhouse gases and to begin introducing caps on carbon dioxide into the Clean Air Act.

Finally, any solution must be structured to include opportunities for compensating regions and industries that lose when policies shift. This was a key to the success of the Clean Air Act. During the 1980s, Congress made repeated unsuccessful efforts to address worries about acid rain. However, when the first Bush administration endorsed a cap-and-trade system, Congress was able to pass a bill, partly because it found a way to funnel money to “losing” coal states and coal-fired plants in the form of job training for coal miners and generous emission allowances for those plants. The devil in any political compromise is always in the details, and in environmental matters these details often concern money. Perhaps this is why the battle over NSR has been so polarized; the battles are played out in court, where it is harder to compensate losers.

The debate over NSR takes the form of classic struggles: environmentalists versus industry about how much to spend to reduce pollution, and the Northeast versus the Midwest about who should pay. The disagreement about climate change is more difficult, reflecting not only differences in values and interests but also differences in judgments about how soon action must be taken to address the issue. But both are matters on which compromise should be possible if politicians have the will to think boldly and structure debate properly.

The key is to frame the issues in ways that the public can understand. Rather than a series of changes in highly technical regulations, the administration or Congress should put the central question squarely to the public. Should the federal government exempt old coal-fired plants from clean air standards in order to ensure that coal remains a low-cost option until clean coal technologies are in place? Or should the government reform NSR but stand by its statutory commitment to protect public health?

Viral Trade and Global Public Health

In June 2003, some 80 people in three Midwestern states were stricken with monkeypox. Until then, the disease–a sometimes fatal viral infection related to smallpox–had never been seen outside Central and West Africa. In the United States, the virus is believed to have spread to humans from pet prairie dogs, which in turn were likely infected by a giant Gambian rat held by a Chicago exotic pet dealer. So far, no one has died. But if the outbreak is confirmed to be an inadvertent byproduct of trade, then it is yet another warning sign of the growing international exchange of viruses.

The increase in viral traffic is the result of two converging trends: Deadly new viruses are appearing at an accelerating rate, and viruses are traveling around the world faster than ever before. During the past several decades, a number of new or changing viruses have emerged, wreaking havoc wherever they strike. The deadly avian influenza virus that hit Hong Kong in 1997-1998 and the Netherlands in 2003, for example, required the mass slaughter of poultry to control the outbreaks. And HIV has decimated much of the working population in sub-Saharan Africa and threatens other parts of the world.

Today, a virus that emerges in one place can quickly find its way to any other place on earth. Witness the discovery of the West Nile virus in New York City in 1999 or the recent global spread of severe acute respiratory syndrome (SARS). SARS illustrates one of the new realities of the global economy: Except for war, terrorism, and natural disasters, nothing stops global trade and travel as effectively as a deadly disease outbreak.

Such outbreaks will become more common unless stringent steps are taken to prevent them. The biggest obstacle to effective control is the unevenness of public health capabilities around the world. Although an individual country might attempt to stem viral outbreaks within its borders, one nation’s public health laws and disease control efforts are only as good as those of its neighbors. This is a global problem, and it requires a global solution. The only way to ensure that all countries pull their weight is to allow the World Health Organization (WHO) to establish stringent global public health laws and standards. Although enforcement would be difficult, another global body, the World Trade Organization (WTO), could provide strong incentives by allowing only countries that adhere to these laws and standards to participate fully in global trade. By working together, these two organizations could create effective strategies for preventing and controlling the inadvertent trade of viral infections.

In the battle between humans and viruses, viruses have all the advantages. One reason is the sheer number of viral particles. The oceans alone contain more than 10³⁰ bacterial viruses. That number is larger than the number of stars in the observable universe, and it describes just a small subset of viruses: those that infect only bacteria. And most viruses are completely unknown. According to Lynn Enquist, a virologist at Princeton University, scientists have identified only 1 percent of the viruses on the planet.

An infected person or animal can shed enormous numbers of new viral particles. For example, one 35-ton gray whale, infected with a virus that causes diarrhea, has been estimated to excrete over 10¹³ new viruses into the ocean each day. This class of virus, the noroviruses, can infect terrestrial hosts, including humans. They have been implicated in outbreaks of illness aboard cruise ships. On a lesser scale, one human with end-stage AIDS can produce a billion new viral particles per day. Given that upward of 40 million people worldwide are estimated to have HIV infection or AIDS, one gets an idea of what humans are up against.

Viruses are the most successful life form on the planet, yet they are not even technically alive. Their complex chemical structures consist of a strand or two of DNA or RNA and a protein coat. They are parasites; they replicate by entering the cells of bacteria, plants, or animals. Once inside their host, they hijack the cell’s machinery to produce new viral progeny.

Viruses also have an evolutionary advantage. Their replication is sloppy. Mutations, recombinations, and reassortments of viral genetic information occur at high rates, helping viral offspring to adapt and survive. It is these inherent viral properties of parasitism and mutation that make it difficult to develop effective drugs and vaccines. Viruses are rarely conquered and eliminated. Smallpox is the exception, and even it could be lying in wait as a potential bioweapon.

Many factors contribute to the increasing emergence of previously unknown viruses into human populations. Global deforestation, destruction of natural habitats, and human population pressures are well-described worldwide problems. But viruses can also be introduced into human populations by certain cultural and trade practices. Because many of the newly discovered viruses spread between animals, either through direct transmission or through insects, anything that alters the fragile animal-viral ecologic cycles is dangerous. Some of the worst offenders include the slaughter and consumption of wild animals, the trade of exotic animals as pets and laboratory specimens, and the trade and dumping of mosquito-infested scrap tires.

Bush meat and cannibalism

The practice of slaughtering and eating wild animals has been blamed for both the AIDS and the SARS epidemics. Researchers found that a substantial fraction of wild monkeys in Cameroon are infected with SIV, the simian cousin of HIV, and that the humans who hunt them are exposed to a wide range of viruses. HIV itself has been isolated from common chimpanzees, which are believed to be the original source of the AIDS pandemic after hunters killed and ate them. Ironically, a recent article in Science suggests that chimpanzees acquired their SIV from monkeys they had killed and eaten.

Despite the AIDS epidemic, many African leaders have done little to halt the spread of the disease or to decrease the risk of new viruses affecting humans. In Africa, bush meat is still widely eaten, often in preference to domestic meat from cows, pigs, and chickens. Although great apes such as chimpanzees and gorillas make up only about 1 percent of the game caught in the forests in remote areas of western and equatorial Africa, their meat poses the greatest risk to human health.

Great apes are humankind’s closest living relatives, sharing more than 98 percent of our DNA. Viruses that infect and sicken these animals are therefore more than likely to afflict humans, and vice versa. For example, the Ebola virus, one of the deadliest viruses to humans, is now killing many great apes. Because Ebola is transmitted through contact with blood and bodily fluids, the virus would certainly spread to humans who slaughtered and consumed infected apes.

Previous experience suggests that unhealthy cultural practices can be stopped through education and well-enforced legal bans. One example is the epidemic of kuru in Papua New Guinea. Kuru, a debilitating brain disease similar to mad cow disease, was spread by the custom of eating dead relatives. It produces holes in the brain like those in Swiss cheese. Most of the victims were women, who traditionally ate the brain and internal organs, the most heavily infected parts. Because kuru has a long incubation period, it was not obvious to its victims that the illness had come about from eating unsafe meat. The practice of cannibalism was banned once this was known, and kuru epidemics disappeared.

AIDS, too, has a long incubation period that can obscure the process of cause and effect. So just as educational efforts and cultural reforms were vital in stopping kuru in Papua New Guinea, similar measures could help fight the transmission of AIDS from apes to humans in Africa, where trade in ape meat is illegal but still practiced.

In China, exotic wild animals are commonly eaten, often with little awareness of the dangers of infection. The virus that causes SARS has been identified in the masked palm civet, a cat-sized animal that is considered a delicacy there. Several of the first SARS victims were chefs, and all six of the initial patients linked to SARS outbreaks in Guangdong Province had handled or eaten wild animals before becoming ill.

In response to the SARS epidemic, the Chinese government recently banned the capture, transport, sale, and purchase of virtually all wild animals, according to the Wall Street Journal. But open-air markets in China are reportedly still filled with animals of all species–some covered with the feces of adjacent caged animals, a perfect setup for viral transmission from one species to another. Clearly, China’s new law will be a challenge to enforce, particularly if inspections are conspicuous and vendors have time to hide their illegal wares.

Live imports

As the case of monkeypox shows, humans do not have to kill or eat wild animals to acquire their viruses. The trade in exotic animals as laboratory research specimens or as pets is just as dangerous. Monkeys make valuable laboratory specimens because of their physiological similarity to humans. But that same attribute puts human handlers at risk of contracting monkeys’ diseases.

In the 1960s, monkeys from Uganda were imported to Marburg, Germany, for use in vaccine development. The monkeys turned out to be infected with a deadly hemorrhagic fever virus, which spread to animal handlers and technicians. Several people died as a result of the outbreak, and the causative agent, a relative of the Ebola virus, became known as the Marburg virus. Another outbreak of an Ebola-related virus occurred in laboratory monkeys imported from the Philippines to Reston, Virginia, in 1989. Luckily, the virus was not deadly to humans, but a warehouse full of monkeys had to be destroyed.

In 1997, a young female worker at a primate research center died from a herpes B virus infection 42 days after a macaque splashed her right eye with what was presumed to be fecal matter. Macaques are frequently used in biomedical research, and more than 80 percent of them carry the herpes B virus. In some 40 reported cases, macaques have infected humans with the virus, which can cause a potentially fatal brain infection, meningoencephalitis. Before antiviral agents became available, the death rate was over 70 percent. Survivors often suffer permanent neurologic damage. Even though keeping these animals as pets has been illegal in the United States since 1975, the Centers for Disease Control and Prevention has received numerous reports of macaque bites in private households during the past decade.

Stricter regulation and legislation could help. The federal government has allowed the unregulated import of exotic animals to “spiral out of control,” as Wayne Pacelle, senior vice president of the Humane Society of the United States, has put it. In response to the monkeypox outbreak in the Midwest, the U.S. government has banned the sale of prairie dogs and the importation of rodents (including the offending Gambian giant pouched rat species) from Africa. Two senators have called for a special committee to look into the regulations governing the importation of exotic pets. However, although these responses make sense in the short term, they do not address the problem in the long run. The real dangers of viral outbreaks suggest that a ban on imported pets should extend to all wild animals. As usual, enforcement will be another matter entirely. According to the U.S. Fish and Wildlife Service, the illegal wildlife trade in the United States amounts to $3 billion annually. Primates are believed to constitute a significant part of that trade.

Travel by tire

Some viruses are transmitted from animals to humans by insects such as mosquitoes. Whether or not they carry viruses, mosquitoes often manage to hitch a ride in scrap tires during shipping. Because their interiors are dark and often contain standing water, scrap tires make excellent incubators for mosquito eggs. In Taiwan, such tires have been found to serve as breeding sites for mosquitoes that transmit dengue fever.

A 1992 National Academy of Sciences report on emerging infections found that the United States generates a quarter of a billion discarded tires each year and imports several million more. Because the market for shredded tires is small, fewer than 5 percent of the world’s scrap tires are recycled. But even a modest level of international trade in scrap tires poses dangers. Traveling in tires, exotic mosquitoes from Asia have managed to spread to distant points. Some have taken up permanent residence.

In 1985, the Asian tiger mosquito, Aedes albopictus–a known vector for dengue fever, yellow fever, and viral encephalitis–was found to have ridden in a shipment of scrap tires from Japan to the United States. This mosquito has now been found in 16 states. In New Zealand, a 1992-1993 survey published in the Journal of the American Mosquito Control Association found mosquito-infested scrap tires aboard five ships from Japan and one from Australia. A similar hunt in South Africa turned up the immature stages of various mosquitoes in a shipment of tires from Japan. Scrap tire shipments have also enabled the yellow fever vector mosquito, Aedes aegypti, to establish itself in previously unaffected areas of Pakistan. Although those mosquitoes do not cause disease by themselves, they are essentially tinder waiting for a viral match.

A few governments are trying to do something about the scrap tire problem. Taiwan, for example, requires tire manufacturers to prepay deposits or charges according to the size of the tires they produce. The funds are then used to recover and recycle the tires. In Kaohsiung City, more than 80,000 scrap tires have been collected to make an ocean jetty.

In the United States, there are no federal laws dealing with the disposal of scrap tires, and state requirements vary considerably. According to the Rubber Manufacturers Association, New York state recently enacted legislation to clean up 40 to 50 million stockpiled scrap tires, the largest inventory outside Texas. The new law creates a dedicated fund to collect scrap tires and to promote their use in recycled products such as playground coverings. In Illinois, legislators responded to the Asian tiger mosquito problem by passing the Illinois Waste Tire Act. The law calls for scrap tires to be collected from dumps for recycling and encourages the building of waste tire processing facilities and the development of new recycling technologies.

Those efforts are a good start, but much more needs to be done. Not only should national governments address this problem, but organizations such as the World Bank should help promote recycling projects and new uses for scrap tires worldwide.

Uneven defenses

With so many possible avenues of transmission, the inadvertent trade of viruses will continue to plague nations as they struggle to keep their populations healthy. One government’s success or failure will be highly dependent on the public health capabilities of its trading partners and neighbors. Unfortunately, the capabilities of different governments are grossly uneven. The frailty of a country’s public health policies becomes glaringly obvious when they are tested by novel diseases such as SARS. But inadequacies are also reflected in the way many nations deal with common, easily preventable diseases.

In the instance of SARS, China provided the classic example of how not to respond to an epidemic. It suppressed information for four months and then hindered investigations of the outbreak. And in contrast to Vietnam, Canada, Singapore, and Taiwan, which quarantined thousands of people in their homes, Hong Kong officials strongly resisted such measures. Those mistakes dealt a blow to China’s credibility and economic growth, and they took a heavy toll in human lives: By June 2003, according to WHO, the disease had struck 29 countries, afflicting 8,000 people and killing 800.

In response to the SARS epidemic, WHO’s 192 member nations have granted the agency broad new powers for responding to the next crisis. Although WHO is required to collaborate with the affected country, it can now apply pressure to send in its own investigators to conduct epidemiologic studies. The agency can also collect data from nonofficial sources, and now has the power to issue global health alerts, something it did for SARS in March 2003 without explicit authority. Although this is an important development in the arena of international health, it still does not address the vast differences of capabilities and competencies in individual nations’ responses to health threats.

A good yardstick of such capabilities is the way in which a country manages routine vaccine-preventable diseases: If it cannot handle the easy diseases, it stands little chance of handling the difficult, unexpected, exotic diseases, including those that might arise from bioterrorism. A particularly useful viral disease to study in that regard is measles, because it is highly contagious yet easily controlled. Measles can cause complications such as pneumonia, encephalitis (brain infection), seizures, and subacute sclerosing panencephalitis, a rare degenerative condition of the brain. Although in the United States measles is fatal in only 1 or 2 out of 1,000 reported cases, it can have a death rate as high as 25 percent in malnourished children.

The measles vaccine is cheap (about 26 cents), readily available, and highly effective in controlling outbreaks. Nevertheless, measles kills roughly a million people per year worldwide. Not all of those deaths are in developing countries, either. Between 1989 and 1991, the United States witnessed more than 55,000 cases of measles, some 120 of them fatal. Aggressive U.S. vaccination programs since 1993–including the requirement in 49 states that children receive two measles vaccinations before entering kindergarten–have drastically reduced the number of cases. Indeed, epidemiologic studies indicate that none of the outbreaks of measles after 1993 were from strains endemic to the United States–all were imported. In other words, U.S. domestic strains have been stamped out. By classifying measles virus strains into eight distinct genetic groups based on geographic distribution, researchers found that countries as different as Japan and Germany have served as the source of at least five separate U.S. measles outbreaks from 1994 to 1997.

In Japan, an estimated 100,000 to 200,000 cases of measles occur each year, killing 50 to 100 people. Only about 80 percent of the population has been vaccinated, short of WHO’s international goal of 90 percent. That is because Japan revised its vaccination law in 1994 in response to some alleged cases of meningitis in children who were vaccinated for measles, mumps, and rubella between 1989 and 1993. Under the revised law, childhood vaccinations against measles are recommended but not mandatory. Not only does Japan now have a high incidence of measles, but unvaccinated Japanese travelers have been suspected of causing outbreaks elsewhere. In New York City, for example, a measles outbreak in July 2001 was traced to a group of Japanese students who were visiting a university. Similarly, a large outbreak that struck Anchorage, Alaska, in 1998 was caused by viruses closely related to known Japanese strains. It began four weeks after a boy visiting from Japan came down with the disease.

Germany provides a unique perspective on the impact of public health measures on measles because of its history of reunification. Before reunification, East Germany was close to eliminating the disease, thanks to mandatory vaccination, excellent surveillance, and successful vaccination strategies. In contrast, West Germany, with none of those defenses in place, had a significant measles burden.

Since reunification, the lax standards of the former West Germany have prevailed nationwide. Despite the passage of the Communicable Diseases Law Reform Act of 2001, which mandated national surveillance and required children to be vaccinated before entering school, Germany’s vaccination coverage remains low: only 22 percent at 15 months, 77 percent at 24 months, and 87 percent at 36 months. Even at school entry, fewer than 90 percent of children have received the first dose of the vaccine. Not surprisingly, Germany has experienced many measles outbreaks since reunification.

Germany’s poor performance illustrates how difficult it will be for WHO to achieve the goal of eliminating measles from Europe by 2007. Although WHO recommends that children receive the measles vaccine at 9 months of age and a second dose between the ages of 4 and 6, before they enter school, the agency does not have the authority to mandate public health requirements.

In the absence of such authority, the world’s defenses against communicable diseases remain a patchwork of unequal and often ineffective national policies. Countries with weaker laws and public health practices have a greater burden of disease and are more likely to export those diseases to other nations. Not even the more developed regions have adequately dealt with this problem. The European Union has managed to institute a single market and currency, yet it has not established laws to ensure a uniform public health system. The relatively strong health policies of Finland and Sweden coexist alongside the weaker policies of Germany and Italy. As long as such disparities in public health laws, surveillance, and disease control strategies continue, so will the inadvertent trade of viral diseases.

Global public health standards

What is needed is the establishment of stringent public health laws and standards that all nations must meet. As the means of transporting viruses around the planet continue to multiply, all countries will have to be prepared to stem the spread of new and exotic diseases such as HIV, dengue fever, and SARS. But they can do that only if they are capable of containing common diseases such as measles, influenza, and hepatitis. Countries should have to demonstrate some minimum level of competency in dealing with those ordinary diseases as a prerequisite to full participation in global trade and travel.

Accomplishing that goal would not be easy. Some countries, such as those of the former Soviet Union, are struggling to develop functional bureaucracies and legal systems. Others are plagued by political or economic instability. Even the United States might prove to be a hard sell. An indication of that is the resistance that has greeted efforts to implement model public health laws across all 50 states: laws designed to improve readiness to respond to bioterrorism.

Virtually every United Nations member nation, regardless of the condition of its public health apparatus, is a member of WHO. Although membership requires nations to uphold the agency’s constitution, which is part of international law, many of its laws are vague; they do not specify how countries should ensure public health. In the age of emerging viruses, international public health laws need to be more explicit on issues such as vaccination requirements and the consumption of wild and exotic animals.

Strong economic incentives might be required to convince political and economic leaders that public health is important and that their nations’ trading partners and neighbors are serious about the issue. WTO membership could be contingent on the requirement that countries meet global public health laws and standards. Countries would have to ensure that their exported goods are not contaminated. They would have to certify, for example, that used tires do not contain mosquito eggs or larvae. Countries would also have to eliminate those enormous stockpiles of used tires.

The stakes are high. The social, political, and financial consequences of future emerging viral outbreaks could be even worse than those of the most recent viral scourges. WHO and the WTO have worked together since 1995 to promote international food safety standards. This type of collaboration could help ensure the competency of public health systems across nations and curtail the economic and cultural activities most likely to promote the emergence and spread of disease.

Establishing a Bureau of Environmental Statistics

In 2003, Rep. Doug Ose (R-Calif.) proposed the Department of Environmental Protection Act, which would elevate the Environmental Protection Agency (EPA) to a cabinet department and create within it a Bureau of Environmental Statistics (BES). Although cabinet status for the EPA may have symbolic or organizational advantages, the creation of a BES could prove to be the most meaningful portion of the bill, as well as an important development for future environmental policymaking.

Noting the weakness of available data describing the environment, in comparison to data available to other agencies in their own respective purviews, the bill would authorize the proposed BES to collect, compile, analyze, and publish “a comprehensive set of environmental quality and related public health, economic, and statistical data for determining environmental quality . . . including assessing ambient conditions and trends.”

Why do we need another bureaucratic agency collecting statistics? The overarching reason is that we simply do not have an adequate understanding of the state of our environment. In many cases, the network of monitors measuring environmental quality is insufficient in geographic scope. Our knowledge of national air quality, for example, is often based on a few monitors per state; our knowledge of water quality is even weaker.

Of course, this easy answer raises the further question of why we need a better understanding of the state of our environment. There are at least three returns that will result from collecting environmental data, each of which could pay greater dividends with reorganization and investment.

The first return from a BES would be to improve our monitoring and enforcement of environmental standards. Environmental standards in the United States generally fall into one of three types: standards for production technology or other behavior, emissions, and ambient concentrations. Technology standards prescribe that a specific technology or technique be used in the production process (for example, a specific type of equipment at a factory or plowing practice for farmers). Emissions standards specify a maximum rate of pollution emissions from a source, per unit of time or output. When pollution emissions are concentrated at a discrete number of sources (power plants and large factories), both types of standards are fairly straightforward to enforce through inspections or monitoring. Ambient standards pose greater challenges. Ambient standards require that pollution, after dispersing from its source through the air and water, not surpass some specific level. For example, eight-hour average concentrations of ozone cannot exceed 0.08 parts per million on more than three occasions per year at any location. If concentrations do exceed this standard, they trigger technology-based standards and other rules for the region. In the case of air quality, counties and regions that fail to meet the ambient standards risk the loss of federal highway dollars, bans on industrial expansion, and mandatory installation of expensive pollution-abatement equipment.
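
To make the threshold mechanics concrete, the short Python sketch below applies the ambient test described above to a hypothetical year of readings from a single monitor. The 0.08 ppm limit and the allowance of three exceedances follow the text, but the readings and the simplified one-year test are illustrative assumptions, not the actual regulatory procedure, which averages over multiple years of data.

    # Hypothetical check of the 8-hour ozone standard described in the text:
    # 0.08 ppm, with more than three exceedances per year triggering nonattainment.
    OZONE_LIMIT_PPM = 0.08
    ALLOWED_EXCEEDANCES_PER_YEAR = 3

    def exceeds_standard(daily_max_8hr_averages_ppm):
        """Return True if more than three daily values exceed the limit."""
        exceedances = sum(1 for v in daily_max_8hr_averages_ppm if v > OZONE_LIMIT_PPM)
        return exceedances > ALLOWED_EXCEEDANCES_PER_YEAR

    # One invented year of daily maximum 8-hour averages (ppm) for a single monitor.
    readings = [0.06, 0.07, 0.081, 0.09, 0.085, 0.079, 0.082] + [0.05] * 358
    print(exceeds_standard(readings))  # True: four days above 0.08 ppm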

The integrity of such a system depends on the network of monitors measuring ambient quality. Currently, many expensive environmental regulations, with serious consequences for businesses and local economies, are based on a limited monitoring network. One may well wonder if some areas that are above the ambient thresholds have escaped detection. At the same time, there is some evidence that other areas continue to be designated as noncompliant even when they seem to meet the ambient standards. Recent research by Michael Greenstone of the University of Chicago has shown that many counties remain in official noncompliance for sulfur dioxide standards, even though readings from the available monitors have shown compliance for many years. The catch-22 is that a county must prove compliance throughout its jurisdiction even if the monitoring network is inadequate to shed light on all areas. Both kinds of errors undermine the fairness and effectiveness of the system and could be reduced with a more extensive monitoring network.

The second return from more data collection would be to satisfy our natural desire to understand broad trends that affect our society and its welfare. It is because of such a desire that we first began to collect many of our national economic statistics, including the familiar measures of gross domestic product (GDP) and inflation. Yet ever since the origination of the GDP concept in A. C. Pigou’s seminal Wealth and Welfare (1912), it has been acknowledged that GDP is only a proxy and is not a perfect measure of welfare, because it omits many important components that do not pass through markets. Even then, the environment was acknowledged to be one of the important omissions. Since that time, we have invested enormous resources in improving measures of the market components of national well-being, but we have not proportionately broadened that effort to other components such as the environment.

As its tools and data develop, the BES could eventually arrange such data about the state of our environmental well-being into a three-level hierarchy. The first level would essentially be the raw data that we currently gather (such as ozone levels in the troposphere and dissolved oxygen levels in lakes), albeit with some reforms. Reforms are required because, to date, most ambient monitoring has been motivated by the first objective described: the assessment of ambient standards. Accordingly, we tend to focus on areas that are known or suspected problem areas. For example, 10 states account for more than half of our ozone monitors. The Los Angeles and Houston metropolitan areas alone, with some of the worst air quality in the nation, have about 45 and 35 ozone monitors respectively–more than most states.

This approach is perfectly reasonable from the standpoint of enforcement, but not for surveying the overall state of things. From that perspective, the data we have is biased: Precisely because it is gathered at the problem spots, it is not representative of other areas. Between two monitored problem spots (two cities, say), pollution concentrations are presumably much lower, but we cannot tell without a monitor, and simply averaging the concentrations from the two monitors would not be correct. From the standpoint of assessing the real state of things, we need something like a random spatially distributed sample of observations. A smart sample would still focus on cities and other areas where, because of topography and economic activity, pollution concentrations vary more widely across short distances. But it would still look very different from our current sample. With a wider network, a BES with a broad statistical mandate could develop a sampling scheme that balances both the enforcement and the surveying objectives.
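
The sketch below, offered only as an illustration, shows one way a BES might implement the balance described above: a stratified random sample that places more monitoring sites per unit area in urban strata than in rural ones while still covering both. The strata, areas, and target densities are hypothetical.

    # Hypothetical stratified sample of monitoring sites: denser in urban strata,
    # sparser in rural ones, but covering both.
    import random

    random.seed(0)

    strata = {
        # stratum: (area in square miles, target monitors per 1,000 square miles)
        "urban":    (50_000, 4.0),
        "suburban": (150_000, 1.0),
        "rural":    (3_300_000, 0.1),
    }

    def draw_sites(area_sq_mi, density_per_1000_sq_mi):
        """Place the stratum's target number of sites at random within a unit
        square standing in for the stratum's territory."""
        n = round(area_sq_mi * density_per_1000_sq_mi / 1000)
        return [(random.random(), random.random()) for _ in range(n)]

    sample = {name: draw_sites(area, dens) for name, (area, dens) in strata.items()}
    for name, sites in sample.items():
        print(name, len(sites), "candidate sites")  # urban 200, suburban 150, rural 330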

Gathering such basic data would be only the first step in improving our understanding of the state of our environment. Eventually, to obtain a complete picture, we would want to move to the second level in the hierarchy of our understanding: aggregate indices of the environment or of environmental systems. What is the state of air quality, taken as a whole? The state of riparian or forest ecosystems? Although meaningful, such questions raise still more. How do we define the limits of an ecological system in the dimensions of space, media, and species? What are the measurable indicators of its health? How can these various measures be aggregated into a single index? Mirroring the entrepreneurial beginning of our economic indices, a number of groups are pursuing these questions. A leading example is the H. John Heinz III Center for Science, Economics and the Environment, which produced the 2002 report, The State of the Nation’s Ecosystems. A BES that improved and centralized our collection of basic data would undoubtedly contribute to these efforts. But it would also be in a position to advance them to the next step of an official government index.

In the long run, with an additional congressional mandate, the bureau could aspire to a third level of assessment: collaborating with the Bureau of Economic Analysis, which produces the GDP, to produce a “green GDP.” An idea that is actively being pursued by European countries, a green GDP would advance Pigou’s original vision by more closely approximating our overall welfare. It would net out reductions in our natural resources and “natural capital assets” in the same way that net GDP currently accounts for depreciation in the stock of human-made capital. It could also account for services provided by the environment–protection of human health, enhancement of outdoor recreation, flood protection, and so forth–in the same way that GDP accounts for the services provided by market goods. The services would be valued at people’s marginal willingness to pay, analogous to the price paid for market goods, using the type of data routinely collected today for benefit-cost analyses.
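
As a back-of-the-envelope illustration of the accounting described above, the sketch below nets natural-capital depletion out of GDP and adds an estimated value for environmental services, mirroring the way net GDP handles depreciation. Every figure is invented and stated in billions of dollars.

    # All figures are invented, in billions of dollars.
    gdp = 11_000                     # conventional GDP
    capital_depreciation = 1_300     # depreciation of human-made capital (as in net GDP)
    natural_capital_depletion = 120  # drawdown of resources and natural capital assets
    environmental_services = 350     # health, recreation, flood protection, etc.,
                                     # valued at estimated willingness to pay

    net_gdp = gdp - capital_depreciation
    green_gdp = net_gdp - natural_capital_depletion + environmental_services

    print("Net GDP:  ", net_gdp)     # 9700
    print("Green GDP:", green_gdp)   # 9930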

Designing better policies

The third type of return from more data collection would be in our ability to design better public policies for the environment. Currently, our ability to design policies that properly balance environmental quality with other objectives, or that attain environmental objectives in the most efficient and effective manner, is hampered by inadequate information. This knowledge gap is more meaningful than a mere shortage of beans for bean counters. It manifests itself in every stage of policy design and evaluation.

Looking in the rearview mirror, we do not know in many cases whether existing policies have been effective, which makes it difficult to assess what remains to be done. Looking forward, we often find that the playbook of strategies with which we might attack environmental problems is limited by lack of information. Sometimes, the lack of information creates practical problems for implementing and enforcing a strategy. For example, recent thinking about the control of water pollution has focused on the total maximum daily load (TMDL) of pollution, from all sources, entering water bodies that violate ambient standards. Theoretically, this is a sound approach for two reasons. First, it is firmly grounded in the reality of water pollution problems, which, with large point sources well regulated, increasingly have their source in disparate urban and agricultural runoff. Second, it can increase the flexibility and efficiency of pollution control by considering all the sources of pollution and concentrating on the most cost-effective targets. But it is data-intensive. Like any ambient standard, it would require a sufficient monitoring network. But it would also require an inventory of pollution sources (point sources, roads, farms, and other land uses), their levels of pollution, and models of the transport of their pollution to the water body. It is difficult to imagine pulling off such a policy without a great deal of investment in data collection and analysis.
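
A minimal sketch of the bookkeeping a TMDL would demand appears below: an inventory of sources, each with an estimated load and a delivery fraction standing in for a transport model, compared against the allowable daily load. The source names, loads, and coefficients are invented for illustration.

    # Invented inventory for one water body: each source has an estimated load and
    # a delivery fraction standing in for a pollutant-transport model.
    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        load_lbs_per_day: float    # pollutant leaving the source
        delivery_fraction: float   # share estimated to reach the water body

    TMDL_LBS_PER_DAY = 500.0       # hypothetical total maximum daily load

    inventory = [
        Source("municipal treatment plant", 300.0, 0.95),
        Source("cropland runoff",           600.0, 0.25),
        Source("urban storm sewers",        250.0, 0.60),
    ]

    delivered = sum(s.load_lbs_per_day * s.delivery_fraction for s in inventory)
    print("Delivered load (lbs/day):", delivered)                                 # 585.0
    print("Reduction needed (lbs/day):", max(0.0, delivered - TMDL_LBS_PER_DAY))  # 85.0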

In other cases, the lack of information makes it difficult to anticipate the effects of a policy, creating political uncertainties. For example, the 1990 Clean Air Act Amendments ushered in the large-scale use of markets to limit pollution: Total sulfur dioxide emissions from power plants are capped at a certain level, and utilities can trade permits representing the right to pollute under this cap. This system has proven to be a highly cost-effective way to reduce air pollution nationally, but one outstanding question is whether it might allow pollution to concentrate in particular areas. Without a more thorough monitoring network, it is impossible to know whether these so-called hot spots are a serious problem. The consequence is hesitation in further use of this potentially effective policy instrument.
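
The toy example below illustrates why such trading is cost-effective, using two hypothetical plants under a fixed cap: the plant with cheaper abatement does the cleanup and sells its spare permits, meeting the cap at half the cost of uniform cutbacks. All numbers are invented; real allowance markets involve many sources and continuous emissions monitoring.

    # Two invented plants under a fixed cap; abatement costs are dollars per ton.
    cap_tons = 150
    plants = {
        "old coal plant":  {"emissions": 120, "abatement_cost": 200},
        "newer gas plant": {"emissions": 80,  "abatement_cost": 600},
    }

    required_cut = sum(p["emissions"] for p in plants.values()) - cap_tons  # 50 tons

    # Trading outcome: the cheaper abater makes the whole cut and sells spare permits.
    cheapest = min(plants, key=lambda name: plants[name]["abatement_cost"])
    trading_cost = required_cut * plants[cheapest]["abatement_cost"]

    # Command-and-control comparison: each plant cuts the same 25 tons.
    uniform_cost = sum((required_cut / 2) * p["abatement_cost"] for p in plants.values())

    print("Cut needed (tons):", required_cut)           # 50
    print("Cost with trading ($):", trading_cost)       # 10000
    print("Cost with uniform cuts ($):", uniform_cost)  # 20000.0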

Although collecting and analyzing such physical data would probably need to be the first priority of a BES, better economic and social data related to the environment would also improve our ability to design and evaluate policies. The last comprehensive survey of expenditures for environmental abatement and mitigation was in 1990. Without such data, we cannot have a good sense of the aggregate cost of our environmental policies.

A centralized database of estimates of the benefits of various environmental improvements would also be useful. New research, much of it sponsored by the EPA, continues to estimate the “services” provided by the environment to people, in the form of protected health, enhanced recreational opportunities, flood protection, and so forth. Much of it also estimates the monetary value of such services for use in benefit-cost analysis. Other researchers and government agencies routinely refer to this research for estimates that can help gauge the impact of new policies, but in doing so they must comb through the vast and disparate literature each time. A centralized database or library of the research would prevent much duplication of effort. It would also be a necessary source of data for any efforts to compute a green GDP.

There are other reasons to support the creation of a BES. A BES would facilitate one-source shopping for members of Congress, agency administrators, and the public, who currently must navigate a maze of agencies to construct a picture of the nation’s environment. In addition, an independent BES might lend more credibility–a sense of objectivity–to our environmental statistics, giving the public a commonly accepted set of facts from which to debate policy, much as the Bureau of Labor Statistics (BLS) and the Bureau of Economic Analysis (BEA) have done for economic statistics.

Indeed, our previous experience with these economic agencies provides important lessons on which we can build when establishing a BES. First, we must admit that statistics can be politically controversial. During World War II, for instance, industrial wages were linked to changes in the Consumer Price Index (CPI). At the same time, the CPI began to move out of synch with the popular perception of price changes, recording much lower inflation rates than people experienced in their everyday lives, largely because it missed the deterioration of quality in the goods that were selling at modestly increasing prices: Eggs were smaller, housing rental payments no longer included maintenance, tires wore out sooner, and so forth. The result was political uproar, with protests on the home front from organized labor. In the end, a lengthy review process, with representatives from labor, industry, government, and academic economists, resolved the issue.

The importance of regular reviews

Although environmental statistics will probably never hit people’s pocketbooks as directly as did the CPI, they can get caught in the crossfire between business and environmental groups. Building in a regular external review process would help keep the peace during such moments. Crises aside, external reviews would ensure that a BES is balanced and objective, in both fact and perception, and help improve its quality over time.

Indeed, the regular external reviews of the CPI have raised points that would be of value to a future BES. Some are academic questions about sampling and analyzing data that could be addressed within the bureau. Others, such as the need for data sharing, may require congressional action from the beginning. In our economic statistics, there is substantial overlap between information collected for the U.S. Census (housed within the Department of Commerce), the unemployment statistics and the CPI (collected by the BLS), and the GDP (collected by the BEA). To address this concern, Congress recently passed the Confidential Information Protection and Statistical Efficiency Act, which allows the three agencies to share data and even coordinate their data collection.

Similar data-sharing issues would arise regarding environmental statistics. Currently, environmental statistics are collected not only by the EPA but also by the Departments of Agriculture, Interior, Energy, and Defense, and in some cases by multiple bureaus and agencies within these departments. Even some of the economic statistics collected by the Census Bureau, BEA, and BLS would overlap in a complete picture of environmental statistics. Coordination across these agencies–and in some cases consolidating tasks into the new bureau–would be essential for generating the best product without duplication of effort.

An additional insight gained from looking back on our experience is that economic statistics now play a much larger role in our economy and in economic planning than originally envisioned. Most generally, they have been used as a scorecard for the nation’s well-being, a basis for leaders to set broad policy priorities (to stop inflation or spur growth), and a basis for the public to assess its leaders. At a more detailed level, they now fit routinely into the Federal Reserve’s fine-tuning of the economy. Finally, through indexing of wages and pensions, tax brackets, and so on, the CPI automatically adjusts many of the levers in the economic machine.

One could imagine environmental statistics playing each of these roles. First, despite their current weaknesses, environmental statistics already help us keep score of our domestic welfare. Second, they increasingly could be used to adjust policies. Initially, they may serve as early warning signals for problems approaching on the horizon (or all-clear signals for problems overcome). Later, as the data develop and policies evolve to take advantage of them, they may even be used in fine-tuning: On theoretical drawing boards, economists have already designed mechanisms that, based on regularly collected data, would dynamically adjust caps for pollution levels or annual fish catches. The only thing missing is the data with which to make such mechanisms possible.

A final lesson learned is that high-quality statistics cannot be collected on the cheap. We currently spend a combined $722 million annually on data collection for the U.S. Census (excluding special expenditures for the decennial census), the BLS, and the BEA, and more than $4 billion each year for statistical collection and analysis throughout the federal agencies. During the past three years, these budgets have increased at annual rates of approximately 6.5 percent and 9.7 percent, respectively. Nevertheless, these efforts are widely considered to be well worth the cost.

By comparison, the current budget of $168 million for environmental statistics seems small. Consider that in 1987 (the last year for which comprehensive data are available) the annual private cost of pollution control was estimated to be $135 billion, and that government spends $500 million a year for environmental enforcement. With approximately 2 percent of our GDP at stake in these expenditures, and the welfare of many people, a top-notch set of environmental statistics seems long overdue.

A Sustainable Rationale for Human Spaceflight

The August 2003 report of the Columbia Accident Investigation Board (CAIB) noted that “all members of the Board agree that America’s future space efforts must include human presence in Earth orbit, and eventually beyond.” As justification for this point of view, the CAIB offered only President George W. Bush’s remarks on the day of the Columbia accident: “Mankind is led into the darkness beyond our world by the inspiration of discovery and the longing to understand. Our journey into space will go on.”

In parallel, the CAIB was critical of “the lack, over the past three decades, of any national mandate providing NASA (National Aeronautics and Space Administration) a compelling mission requiring human presence in space.” In the absence of such a mandate, “NASA has had to participate in the give and take of the normal political process in order to obtain the resources needed to carry out its programs.” In this give and take, “NASA has usually failed to receive budget support consistent with its ambitions. The result . . . is an organization straining to do too much with too little.”

This criticism, when combined with the CAIB’s endorsement of human spaceflight and with the soul searching that has characterized the nation’s reaction to the Columbia accident, has provided an opportunity to set the U.S. human spaceflight program on a productive long-term path. In the fall of 2003, a series of congressional hearings was held on the future of the human spaceflight program. Most senators and representatives who spoke at those hearings appeared ready to provide NASA with the additional resources needed to set things right and called on the president to put forth a new vision for the future of human spaceflight.

At the other end of Pennsylvania Avenue, the White House recognized the need for an authoritative response to the post-accident situation. Beginning in the late summer of 2003, a high-level, closely held, space policy review was initiated. Its purpose was to provide the president with recommendations for a new vision of the U.S. future in space. As that review was nearing its conclusion in late 2003, President Bush and his top advisors were trying to decide whether they indeed wanted to propose the kind of national commitment to a guiding mandate for future human spaceflight that has been lacking since President Kennedy in May 1961 asked the country to commit to landing Americans on the moon “before this decade is out.” Presuming that the president does articulate such a vision–and the expectation that he will has been carefully nurtured–it will then be up to the public, speaking through their elected representatives in Congress, to decide whether to accept it and to offer long-term support.

Whatever the specifics of the proposed new path in space, it will have to rest on a convincing argument of why it is in the nation’s interest to make and sustain such an expensive commitment, particularly one that inevitably involves risking the lives of more astronauts. Rhetoric about “the inspiration of discovery and the longing to understand” is unlikely to be enough; those reasons have been offered publicly for years and have led to 30 years of unfocused activity. What other possible reasons are there for a commitment to human spaceflight? Can a compelling case be made?

Lessons from history

Most public justifications for accepting the costs and risks of putting humans in orbit and then sending them away from Earth have stressed motivations such as delivering scientific payoffs, generating economic benefits, developing new technology, motivating students to study science and engineering, and trumpeting the frontier character of U.S. society. No doubt space exploration does provide these benefits, but even combined, they have added up to a less-than-decisive argument for a sustained commitment to the exploratory enterprise. The United States has committed to keeping humans in space, but since 1972 they have been circling the planet in low-Earth orbit, not exploring the solar system. The principal rationales that have supported the U.S. human spaceflight effort to date have seldom been publicly articulated. And those rationales were developed in the context of the U.S.-Soviet Cold War and may no longer be relevant.

Kennedy’s proposal to send Americans to the moon was not motivated by a belief in the long-term importance of space exploration. Rather, it was a politically driven response to the situation in the first months of his presidency, as the Soviet Union gathered international acclaim by putting the first human into orbit while the new administration appeared weak as it wavered in its support of an invasion of Cuba. To counter the rapidly falling prestige of the United States and his presidency, Kennedy asked his advisors to find him a “space program which promises dramatic results in which we could win.” The response came back a few weeks later. Kennedy was told that “dramatic achievements in space . . . symbolize the technological power and organizing capacity of a nation.” For that reason, “the nation needs to make a positive decision to pursue space projects aimed at national prestige,” because such projects were “part of the battle along the fluid front of the cold war.” Putting astronauts into space was essential, it was argued, because “it is man, not merely machines, that captures the imagination of the world.”

In a perceptive 1964 study, political scientist Vernon Van Dyke identified Pride and Power (the title of his book) as the primary rationales for the major commitment of national resources made in the early years of the U.S. space program. His analysis can be applied not only to Kennedy’s decision to go to the Moon, but also to the two major policy decisions that have shaped the U.S. civilian space program since. Like the Apollo initiative, the 1972 decision to develop the space shuttle and the 1984 decision to build a space station were influenced more by considerations of national power and national pride than they were by other motivations. Although they lacked the drama of sending Americans to the Moon, they were essential to keeping human spaceflight a continuing U.S. undertaking.

During the summer of 1971, the future of human spaceflight was being debated within the Nixon administration. The staff of the Office of Management and Budget (OMB) had suggested that the final two lunar landing missions, Apollo 16 and 17, be cancelled and that NASA’s request to develop a reusable launch vehicle, the space shuttle, be denied. This struck one of the president’s long-time associates, Caspar Weinberger (then deputy director of OMB), as shortsighted. In a memorandum to the president, Weinberger argued that ending human spaceflight would confirm a belief “that our best years are behind us, that we are turning inward . . . and voluntarily starting to give up our superpower status, and our desire to maintain world superiority.” He added, “America should be able to afford something besides increased welfare.” Nixon replied: “I agree with Cap.” This reasoning carried the day: On January 5, 1972, Nixon announced his approval of shuttle development.

The next occasion for a major commitment to human spaceflight came after the shuttle was declared operational on July 4, 1982. NASA needed a new development program to keep its engineers fully engaged and campaigned hard in 1982 and 1983 to gain presidential endorsement of a space station as that program. In a climactic meeting on December 1, 1983, NASA presented its case for approving station development to President Reagan and other top-level officials. After listing the many functions such a facility could carry out, NASA’s final argument was that it would be “a highly visible symbol of U.S. strength.” This argument was persuasive; in his January 25, 1984, State of the Union address, Reagan argued that “nowhere [other than space] do we so effectively demonstrate our technological leadership . . . . We can be proud to say: We are first; we are the best; and we are so because we’re free.” The president continued: “America has always been greatest when we dared to be great. We can reach for greatness again . . . Tonight, I am directing NASA to develop a permanently manned space station.”

Using NASA’s human spaceflight programs to demonstrate U.S. technological (and by implication, military) power was the underpinning motivation for the three critical decisions that have shaped those programs to date. All three decisions came in the context of the U.S.-U.S.S.R. strategic rivalry. The United States was not brandishing its power in the abstract; rather, in each case it was inviting comparison with its superpower rival.

With the end of the Cold War and the collapse of the Soviet Union, space achievement has lost its potency as a symbol of U.S. power. This country no longer has to demonstrate to the world its superiority vis-à-vis a single rival for influence, and another rationale is now needed to underpin any new and major space commitment. To date, the search for a new rationale has not been successful.

Failed initiative

Between President Kennedy’s 1961 call to action and today, there has been one failed attempt by a president to set a long-term direction for human spaceflight. That attempt came as the end of the Cold War was approaching. Its inability to capture political and public support provided convincing evidence that space achievement was no longer seen as an effective measure of U.S. power.

Fifteen years ago, on July 20, 1989, President George H. W. Bush set out an expansive vision for the future of humans in space. On the 20th anniversary of the Apollo 11 lunar landing, he proposed that the United States accept a “long-range, continuing commitment” to human exploration of the solar system, starting with completing the space station, then a return to the Moon, “this time to stay,” and finally “a journey into tomorrow, a journey to another planet: a manned mission to Mars.” That initiative was virtually stillborn; neither the Democratic-controlled Congress nor NASA itself embraced it with any enthusiasm.

A major problem in 1989 was the lack of an alternative to demonstrating national power as a rationale for making the requested long-term commitment to exploration of space by humans. With the Cold War reaching its end, the president asked: “Why the Moon? Why Mars? Because it is humanity’s destiny to strive, to seek, to find. And because it is America’s destiny to lead.”

The lack of political support for what came to be known as the Space Exploration Initiative suggests that arguments for supporting space exploration as some sort of U.S. manifest destiny do not have the kind of appeal that would lead politicians to commit billions of dollars. The notion that it is important for the United States to be recognized as the leader in major cooperative space undertakings such as the International Space Station has had some degree of political resonance. However, U.S. space leadership, although often proclaimed, has yet to be earned. Even setting aside the significant effects of the Columbia accident on U.S. partners in the International Space Station, this country has a very mixed record of stability and sensitivity to others in that and other space undertakings. Space-faring countries around the world are now looking to other partners as they plan their future endeavors.

The lure of national pride

In his 1964 book, Van Dyke suggested that national pride was as much a motivator of the U.S. commitment to space as was the quest for national power. It may well be that space achievements, particularly those involving direct human presence, remain a potent source of national pride and that such pride is the underpinning reason why the U.S. public continues to support human spaceflight and would find a decision to end the U.S. human spaceflight program unacceptable. Certainly, space images–an astronaut on the Moon, a space shuttle launch–rank only below the American flag and the bald eagle as patriotic symbols. The self-image of the United States as a successful nation is threatened when we fail in our space efforts, as we have seen in episodes ranging from the embarrassment of a misshapen mirror on the Hubble Space Telescope to the sense of collective loss when some of our best citizens die before our eyes in space shuttle accidents. Americans expect a successful program of human spaceflight as part of what the United States does as a nation. They are not overly concerned with the content or objectives of specific programs. But they are concerned that what is done seems worth doing and is done well. It is that sense of pride in space accomplishment that has been missing in recent years.

The CAIB report laid bare for the country to see both that there had been no overarching meaning to the U.S. human spaceflight program in recent years and that the program that did exist was not being well executed. The most damning sentences in the report are those that suggest that neither NASA nor the nation’s leadership lived up to the bargain they had made with those who took the risk of flying on the shuttle “to operate and maintain the vehicle in the safest possible way.”

NASA is now working hard to ensure that never again will it be the subject of such a painful indictment. It cannot do this alone. The nation’s leaders must propose a set of activities for the future that have positive meaning and set worthy goals for humans in space as well as provide the resources needed to carry out those activities successfully and as safely as humanly possible.

If national pride is to be a fundamental rationale for putting people into space, what is obviously needed first is for spaceflight to continue. In 1971, as the debate over whether to approve the space shuttle reached its climax, NASA Administrator James Fletcher argued to the White House that “for the U.S. not to be in space, while others do have men in space, is unthinkable, and is a position which America cannot accept.” That admonition still seems valid. It is indeed unthinkable that the United States would abandon human spaceflight just as China, its putative 21st-century contender for global influence, begins its own program.

Beyond just being in space, what seems important to a continuing feeling of national pride is a sense that people are doing valuable things there. For government-sponsored activity, that is likely to require that people leave Earth orbit to explore and eventually utilize for research purposes other destinations in the solar system. Building a new and improved space station is unlikely to be a well-received next step after completing the International Space Station; other possible Earth-orbit goals, such as capturing solar energy as a source of electrical power on Earth, are too far in the future. Between 1988 and 1996, one goal listed in U.S. national space policy was “to extend human activity and presence beyond Earth orbit into the solar system.” No specific destinations were listed, nor were schedules set. If the president proposes and Congress approves the initial steps toward this goal, much will have been done to restore a longer-term purpose for human spaceflight.

A space exploration program that provides the promise of continued scientific payoffs, that serves as a vehicle for U.S. leadership in carrying out missions that have sparked the human imagination for millennia, that excites young people and attracts them toward technical education and careers, and that would serve as a source of renewed national pride in its accomplishment is something that American citizens appear willing to support. The challenge for the country’s leaders is not only to propose such a program as a legacy of the Columbia accident but also to demonstrate the political will to sustain it over coming decades.

New Challenges for U.S. Semiconductor Industry

The United States faces a growing threat to its leadership of the world semiconductor industry. A combination of market forces and foreign industrial policies is creating powerful incentives to shift new chip production offshore. If this trend continues, the U.S. lead in chip manufacturing, equipment, and design may well erode, with important and unpleasant consequences for U.S. productivity growth and, ultimately, the country’s economic and military security. To address this challenge, U.S. industry and the government need to cooperate to determine their response.

As challenges tied to the industry’s move toward ever-smaller dimensions have intensified, governments in Asia and Europe have moved vigorously to coordinate and fund research in both product and process technologies. The scale of these efforts is unprecedented. A recent U.S. National Research Council report, Securing the Future, identified 16 major government-sponsored initiatives at the national and regional [i.e., European Union (EU)] level, a number of them receiving more than $100 million annually in support. Some have been inspired by the success of SEMATECH, the formerly U.S.-only consortium of semiconductor device makers widely credited with helping to pull the U.S. industry out of its tailspin in the late 1980s. What is odd is that although governments abroad have embraced consortia modeled on SEMATECH as a means of supporting national and regional industries, today the United States has no comparable publicly supported effort, even as the technological hurdles faced by this enabling industry continue to grow.

Compounding the heightened competition in research has been the dramatic increase in the cost of new fabrication facilities (fabs) that has accompanied growth in the sophistication and scale of manufacturing. The evolving economics of production have also spawned a new business model, the foundry, which may test even the most agile of the vertically integrated U.S. chip makers. Turning out integrated circuits under contract for firms that work exclusively in device design, including leading U.S. firms, foundries provide relatively low-cost products for “fabless” companies that need high-performance fabrication but are unable or unwilling to invest the $2 billion-plus it now takes to build a new plant.

Until recently, most foundries were operating or planned for construction in Taiwan, where the government has aided the industry with a variety of measures that include generous tax breaks. Just in the past few years, however, China has stepped forward, matching Taiwan’s incentives and trumping them with a rebate of most of the value-added tax (VAT) on chips designed or manufactured in China. In a major shift, the Chinese semiconductor industry is now drawing massively on Taiwanese capital and skilled management, as well as attracting investments from the United States and elsewhere.

Key among the policy tools put to work by China, in cooperation with regional authorities in Shanghai and elsewhere, is highly favorable tax treatment for plants, equipment, and skilled personnel. Yet it is the rebate of the VAT for Chinese products that seems to be having the strongest effect on this capital-intensive industry. This measure may ultimately be declared in contravention of World Trade Organization rules, but for now construction of chip-fabrication capacity in China is projected to boom. Of paramount importance for the future of the U.S. semiconductor industry is the extent to which Chinese government policy will determine the location of the industry, and its supply base, before there is a U.S. response.

Why does the industry matter?

Some may ask whether it matters if the U.S. economy has a robust and growing semiconductor industry. The short answer is that it does. The semiconductor’s powerful impact on productivity growth in the U.S. economy, through improved information technologies and the industry’s own productivity gains, is now generally acknowledged. Drawing on work by economists such as Dale W. Jorgenson of Harvard University and Kevin J. Stiroh of the Conference Board, the Council of Economic Advisers’ 2001 Economic Report of the President described this impact as follows:

  • “The information technology sector itself has provided a direct boost to productivity growth”
  • “The spread of information technology throughout the economy has been a major factor in the acceleration of productivity through capital deepening” and
  • “Outside the information technology sector, organizational innovations and better ways of applying information technology are boosting the productivity of skilled workers.”

Put more succinctly, the semiconductor industry is U.S. manufacturing’s star performer. On the strength of a 17 percent annual growth rate, its output climbed from 1.5 percent of manufacturing gross domestic product in 1987 to 6.5 percent in 2000. In 1999, when it posted $102 billion in sales, it accounted not only for half the world market in its product but also for over 5 percent of manufacturing value-added in the U.S. economy, making it the manufacturing sector’s leader. It boasted 284,000 employees as of August 2001 and paid them an average hourly wage 50 percent higher in real terms than it had 30 years before–a remarkable achievement in light of the overall 6 percent real decline in manufacturing wages over the same period. And it provides the core of the $425 billion U.S. electronics industry.

Yet impressive as these figures may be, they represent only a small part of the semiconductor’s footprint, which dwarfs industry-specific trade, employment, and revenue figures. Indeed, the end of a two-decade slowdown in U.S. productivity growth that took hold in the early 1970s and that coincided with a significant erosion of the country’s industrial power can be traced to a sudden speedup in the rate of decline of semiconductor and computer prices. As Jorgenson has documented, the annual rate of decline in these prices, after holding steady at about 15 percent, vaulted to 28 percent in the mid-1990s. U.S. labor productivity in the period 1995-1998 increased by 2.4 percent per year–a full percentage point higher than the average rate for the preceding five years–as investment in computer technology exploded and its contribution to growth rose more than fivefold. These trends have continued through the recent recession, with productivity gains of 9.4 percent on an annualized basis estimated for the third quarter of 2003.
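
A quick back-of-the-envelope calculation, using only the two rates quoted above, shows why that speedup mattered: at a steady 15 percent annual decline, quality-adjusted prices take roughly four and a quarter years to halve, whereas at 28 percent they halve in just over two years. The short Python sketch below simply works through that arithmetic; everything beyond the two cited rates is illustrative.

    import math

    # Years needed for prices to halve at a constant annual rate of decline.
    def halving_time(annual_decline):
        return math.log(2) / -math.log(1 - annual_decline)

    for rate in (0.15, 0.28):   # the two rates of price decline cited in the text
        print(f"{rate:.0%} annual decline -> prices halve in {halving_time(rate):.1f} years")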

Given semiconductor manufacturing’s prominent role in the U.S. economy as a source of value-added production, high-wage jobs, productivity gains, and wage growth, keeping the industry is clearly in the nation’s interest. And although U.S. clients of Asian foundries believe they can retain in-house chip design expertise in the absence of manufacturing, their negotiating position may shift as capacity tightens, and it is in any case dependent on effective intellectual property protection, which is currently problematic in China. More fundamentally, semiconductor manufacturing is a learning-by-doing industry. Having fabrication facilities in close proximity to R&D facilities is important for researchers and manufacturers alike.

Producing state-of-the-art integrated circuits requires a wide variety of interdependent skills, and these skills need to be honed constantly if they are to stay on the cutting edge. Until now, U.S. chip fabrication plants have anchored specialized industrial clusters that often include R&D labs and the facilities of semiconductor equipment and materials makers, which work closely with device manufacturers on technology development. Often with the aid of government support, such clusters have emerged in cities as diverse as Austin, Phoenix, Portland, Hsinchu, and recently, Dresden. They serve as magnets for human capital.

Figure 1. Source: National Science Foundation.

Sharply rising capital costs for 300-millimeter fabs, concomitant increases in the capacity of these fabs, and slower market growth will tend to limit the number of fabs constructed. Further, as the number of first-line fabs decreases and as competition grows for scarce top-level human resources, risks to the health and vibrancy of U.S. clusters grow as well. The redirection or dissolution of semiconductor production clusters in the United States would likely result in the loss of on-the-job training and shifts in career choices for engineers while reducing activity in associated industries, which are also high-value-added and R&D-intensive. This, in turn, could lead to the loss of learning skills needed for promising new sectors such as solid-state lighting and nanotechnologies, while at the same time calling into question the country’s ability to retain the current level of R&D capabilities.

U.S. labor pool, R&D waning

The pool of skilled labor available to the U.S. semiconductor industry already appears to be shrinking. The number of bachelor’s degrees in engineering granted annually by U.S. universities has been essentially static over the past decade and comes to only one-sixth of the combined total granted in China, India, Japan, South Korea, and Taiwan each year. Many foreign-born engineers are benefiting from first-class education at U.S. schools. As attracting skilled labor becomes an integral part of international industrial competition, these engineers are being offered significant inducements to return home, as are knowledge-rich engineers who now work in U.S. industry. China and Taiwan alike have deployed tax policy to enhance the attractiveness of working in their national high-tech clusters: Both nations effectively exempt stock options distributed as employee compensation and provide numerous other incentives to enhance the standard of living of engineers and managers who opt to work in what countries such as China see as a strategic industry.

At the same time, although U.S. private R&D outlays have been growing at a robust pace overall, federal spending has not kept up with private-sector investment (see Figure 1). This is worrisome, since industry now finances far less long-term research than it did in the heyday of the large central corporate laboratories. And it is specifically in the electronics sector, where U.S. government and industry gave the world an example of constructive public-private cooperation in the formation of SEMATECH in the late 1980s, that U.S. policies have been running contrary to global trends. For example, some reports suggest that the annual support from the Defense Advanced Research Projects Agency, a major source of U.S. funding for long-term R&D in microelectronics, fell from about $350 million to about $55 million–a drop of around 85 percent–during the decade of the 1990s, just as these same industries were driving the growth and productivity of the U.S. economy.

Initiative shifting abroad

Contrasting with U.S. reluctance to provide R&D support are the trends in Europe and East Asia, where national and regional programs are rapidly expanding in size and number as governments signal the importance they attach to their semiconductor industries and seek to address their growing technical challenges with substantial levels of direct and indirect funding. A prominent example, Belgium’s IMEC, draws $45 million in annual funding from the Flanders regional government. Conducting research under contract for European governments and for companies both inside and outside the EU, IMEC has become an international center of excellence for semiconductor manufacturing research.

The EU itself, hard on the heels of its four-year Micro-Electronics Development for European Applications (MEDEA) program (which drew one-third of its 2 billion-euro budget from government sources), has pledged twice as much for MEDEA-Plus, an eight-year program ending in 2009. Germany and France are contributing to research programs at the national level and seeking chip-related foreign investment. Advanced Micro Devices’ recent decision to build a manufacturing plant in Dresden indicates that these efforts are having some success. And Japan’s commitment to a renewal of its industry has spawned at least seven large semiconductor research programs, four of which started in 2001.

Figure 2. Source: Strategic Marketing Associates (September 2003).

Most recently, however, attention has shifted from research funding levels to investment in new fabs, and all eyes have turned rather abruptly toward China, where capital investment is growing rapidly (see Figure 2). In September 2002, four fabs were operating on the mainland, one was under construction, and another 10 were planned; Taiwanese interests were reported to be involved in six of the 10 that were in the planning stages and to have part ownership of the plant being built. These projects had already drawn resources earlier intended for investment in Taiwan itself, where, as of mid-2000, a remarkable 30 new fabs were seen going into production by 2010. Instead, Taiwanese semiconductor enterprises now have their sights set on the construction of no fewer than 19 new fabs on the mainland by the end of the decade, all of them foundries, whereas 10 fewer are to be built in Taiwan. This compares with four to five fabs proposed or underway in the United States. Meanwhile, South Korea has also begun making substantial investments in fabs. This de facto fusion of China’s and Taiwan’s industries is an important development and one that requires a vigorous and constructive U.S. policy response.

The stakes are substantial and are increasingly recognized in the technology community. For example, the President’s Council of Advisors on Science and Technology (PCAST) is finishing the first phase of a study reviewing the health of U.S. high-tech manufacturing. Noting the success of foreign governments in creating an attractive environment for the manufacture of electronics and semiconductors, George Scalise, the chair of the PCAST subcommittee on Information Technology Manufacturing and Competitiveness, points out that “U.S. high-tech leadership is not guaranteed.” He adds that “If we lose leadership and if we don’t have that as a driving force in our economy, then it is going to have an impact on the standards of living here. That is a reality.”

The challenge of speed

The impact of foreign government policies is indeed real and effective. Although the prospect of selling into China’s domestic market provides the long-term impetus to site manufacturing capacity there, it is the immediate impact of tax incentives, particularly the VAT rebate, that is so rapidly redrawing the map for semiconductor production. Perhaps surprisingly, there is not much of a gap between Taiwan and China in investment and operating costs. And it is important to remember that the capital-intensive nature of the industry makes labor–about 7 percent of the industry’s cost structure–a relatively unimportant factor.

This policy, which amounts to imposing a tariff on foreign-made wares, will be effective, at least until it is condemned by the World Trade Organization. But as long as it holds, China’s pull on new production will be powerful. By creating a “price umbrella” for semiconductors produced in China, the policy gives domestic manufacturers the option of either raising profits by increasing prices or undercutting imports to ensure greater capacity utilization and consequently lower unit costs.

The policy also influences the behavior of companies that buy semiconductors for incorporation into products made in China. “Our U.S. customers, who have either joint ventures or wholly owned subsidiaries in China, have indicated to us that sooner or later, we have to be there, because it costs them a lot in taxes to import our goods,” says a top executive of the Taiwan Semiconductor Manufacturing Company. Taiwan’s government is already putting into place measures designed to keep R&D, design, finance, logistics, and marketing–so-called “headquarters” functions–in Taiwan once manufacturing itself has crossed the strait to the mainland.

These policies have created a dilemma for some U.S. managers. Intel cofounder and chairman Andrew S. Grove was recently described by the Washington Post as “torn between his responsibility to shareholders to cut costs and improve profits, and to U.S. workers who helped build the nation’s technology industry.” Decrying the “policy gap” in Washington, Grove predicted that without government help in deciding “the proper balance between the two,” companies would come down on the side of their immediate obligations to shareholders. Their dilemma underscores a key point. Individual firms, particularly those that believe they must be present in the rapidly growing Chinese market, are poorly positioned to counterbalance the strategic industrial policies of a nation, especially one the size of China.

It is important to emphasize that, whatever is done, a significant portion of the semiconductor industry will inevitably locate in China. The current global distribution of production and research cannot and should not be frozen. But there is nothing inevitable about a wholesale shift of this key industry offshore, not least because much of the current acceleration is the result of Chinese government policy. The goal of developing a strong national semiconductor industry is thoroughly understandable from the Chinese perspective. The question is, what should the United States do?

This is not the first time that the U.S. lead in semiconductors has been challenged. In the 1980s, the United States faced a similar challenge, which was met by an innovative policy mix, including the industry consortium SEMATECH and an effective market-opening trade agreement with Japan that stopped dumping in the United States and third markets, thereby helping companies such as Intel to reposition their production toward microprocessors, where U.S. design strengths proved decisive. These policy measures, combined with the innovative capacity of U.S. firms, laid the foundation for the U.S. industry’s growth in the 1990s, which in turn brought major benefits to the U.S. economy. Today’s challenge is not identical, but a cooperative response that involves U.S.-based firms, universities, and government can help the United States maintain its strong position in this industry. The steps that might be taken fall into several different but related areas.

Trade. The most pressing area is that of trade policy, where the United States needs to take immediate bilateral and multilateral action to end China’s discriminatory VAT “tariff.” Failure to act effectively on this issue would likely compromise the effectiveness of most other policy measures the United States might adopt. Bringing China into a rules-based trading system is a laudable goal, but the rules have to work for everyone, including U.S. manufacturing workers.

In addition, outmoded export restrictions on equipment available from foreign suppliers should be dropped to help ensure that the United States remains an attractive export platform in the global economy. Current trade policy has not kept materials out of China, but it has let firms based in other countries make sales that are blocked for U.S.-based companies.

Tax. Countries such as China and Taiwan have implemented substantial tax exemptions, which because of their scope and extremely low effective rates are unlikely to be emulated in a U.S. context. Congress can, however, take some concrete steps to retain manufacturing by U.S. firms–and their tax payments–within U.S. borders. One helpful step would be to reduce the schedule for depreciation allowances from five years to three years, which would more accurately reflect the actual life of semiconductor manufacturing equipment. More broadly, increasing U.S. demand through serious tax incentives for investments in information technology equipment and its manufacture could help increase growth and productivity in the U.S. economy while helping to anchor manufacturing here.
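
To see why the depreciation schedule matters, consider a deliberately simplified comparison: straight-line depreciation of an assumed $2 billion of fab equipment over five years versus three years, with an assumed 35 percent corporate tax rate and an 8 percent discount rate. These figures are illustrative assumptions, not an estimate of the proposal’s actual revenue effect; the Python sketch below only shows that the faster write-off raises the present value of the tax deductions (by roughly $40 million under these particular assumptions).

    # Present value of tax deductions under straight-line depreciation (illustrative).
    COST = 2_000_000_000      # assumed equipment cost, dollars
    TAX_RATE = 0.35           # assumed corporate tax rate
    DISCOUNT = 0.08           # assumed discount rate

    def pv_of_deductions(years):
        annual_tax_saving = (COST / years) * TAX_RATE
        # Deductions are assumed to be taken at the end of years 1..years.
        return sum(annual_tax_saving / (1 + DISCOUNT) ** t for t in range(1, years + 1))

    pv5, pv3 = pv_of_deductions(5), pv_of_deductions(3)
    print(f"5-year schedule: ${pv5 / 1e6:,.0f}M; 3-year schedule: ${pv3 / 1e6:,.0f}M; "
          f"gain from faster write-off: ${(pv3 - pv5) / 1e6:,.0f}M")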

Regular assessment. An in-depth assessment should be undertaken by a government/industry group to determine the challenges facing the U.S. industry, including the scope and impact of current measures by other governments in this sector. Information about other governments’ programs should include their level of expenditure and the impact of these policies on the U.S. industry, particularly on the U.S. defense industrial base.

R&D. The productivity gains driving the U.S. economy are largely derived from applications of information technologies. These technologies in turn are based to a significant degree on continued progress in semiconductors. Yet the federal government cut support for physics, chemistry, and engineering through much of the 1990s. The trend now seems to be improving, but the lag effects of these cuts, as well as their cumulative impact on university research and the recruitment of new researchers, scholars, and students, are a genuine cause for concern. Some believe that we may be shortchanging future generations by failing to maintain the rate of investment that characterized the U.S. economy in the 1960s. Increasing R&D funding in universities and consortia for disciplines related to the industry would be one effective way to help anchor the industry, with its manufacturing expertise and its jobs, here in the United States.

Specific programs directed at basic research related to semiconductor technologies are also important. Securing the Future recommended that the Microelectronics Advanced Research Corporation’s Focus Centers, in which government and industry jointly support university researchers, be fully funded and, ideally, expanded. Inexplicably, this has not happened. While we are spending billions to improve U.S. security, it is important to remember that the foundations of U.S. military power rest in no small part on U.S. leadership in semiconductor-based technologies for communications, logistics, interception, smart bombs, and more. It is worth recalling that the February 1987 report of the Defense Science Board on Defense Semiconductor Dependency concluded that the erosion of U.S. capability in commercial semiconductor manufacturing posed a national security threat. If anything, the military significance of microelectronics as the decisive advantage for the U.S. warfighter has increased exponentially since the 1980s.

State initiatives. Interestingly, as the federal government has pulled away from forming partnerships with industry and even from simply funding existing programs, states such as Texas and New York have moved vigorously into this policy vacuum. They have made substantial investments, in collaboration with industry, to attract and retain semiconductor research facilities, to build centers of expertise, and to support university research and training programs in order to stay abreast of rapidly evolving technology. New York State’s Centers of Excellence Initiative, created by Governor Pataki, has attracted substantial industry and federal funding to create programs such as the Nanoelectronics Center in Albany, developed in partnership with IBM, and an even larger state/industry cooperative effort for the creation of SEMATECH North. Texas, partly in competition with New York, has redoubled its efforts with its new initiatives at the University of Texas at Dallas that are aimed at building a cooperative arrangement between university researchers and a new private chip manufacturing facility near the campus. These state-based efforts are unusual both in their scale and in the political commitment to ensure their financing even through the current economic and budgetary downturn. Given the resource commitments and limitations of state programs, the federal government might envisage implementing matching grants through which federal funds could “match” state and private-sector contributions to R&D facilities. Some of this is already happening. Direct federal obligations for science and engineering in New York have hit a new record at $1.82 billion, a development that underscores the wisdom of expanding the NSF budget. Cooperative approaches involving public and private financing and independent but focused university research can do much to support U.S. industry while training students to tackle the technical challenges of the future.

Above all, we need to recognize a few simple verities:

  • The semiconductor industry is unique by virtue of its rate of growth, its enormous increases in performance at lower cost, and its implications not only for the economy as a whole but for national security.
  • The rest of the world recognizes the benefits of the industry in job creation, productivity gains, and security enhancement, and many countries are making substantial investments to capture these benefits. Some of these actions (such as improving universities) are what national competition should be about; others, such as discriminatory tax regimes, need to be blocked immediately if long-term damage to the industrial fabric of the United States is to be prevented.
  • The challenge is complex, and partnering among government, industry, and universities is one of the best ways to meet it. Enhanced R&D funding, active trade enforcement, and support for local and regional efforts to retain and strengthen clusters of semiconductor-related activities can help ensure a more dynamic and prosperous future for the U.S. economy and its citizens.

Practical Climate Change Policy

Global climate change policy has reached a stalemate. Europe, Canada, and Japan have ratified the Kyoto Protocol, but it now appears that Russia probably will not, and the Bush administration has ruled out U.S. participation. The treaty puts no obligations on developing countries to curb their rapidly growing emissions. Even if Russia joins and the Kyoto Protocol takes effect, the absence of the world’s largest greenhouse gas (GHG) emitters–the United States and China–means that Kyoto will have little impact on future climate change. If the European Union (EU) and other countries proceed with a rump version of the Kyoto regime, it will accomplish even less. Although bills in the Senate and action by the states continue to inch along, the impact of local policies on a global problem is doubtful. Neither of the polar options typically posed for U.S. policy–join Europe in the Kyoto Protocol or stay out and do nothing–is satisfactory.

It’s time for a new, more pragmatic approach. Smart climate policy does not have to choose between extremes. A pragmatic climate policy would balance benefits and costs, heed warnings without being panicked, recognize uncertainty without being paralyzed, employ economic incentives to accomplish results cost-effectively, and learn from experience with regulatory design. It would engage key countries in a new regime parallel to the Kyoto Protocol (or an adaptation of Kyoto led by the EU) that would allow us to test and evaluate international climate regulation over time instead of making an all-or-nothing choice today.

The United Nations Framework Convention on Climate Change, which provided for international cooperation on climate policy, was adopted in 1992 and ratified by the United States. Its Kyoto Protocol, which established binding requirements for net GHG emissions reductions by industrialized countries, was signed in 1997. But the Clinton administration never submitted it to the Senate for ratification, and the Senate voted 95 to 0 to reject any climate treaty that failed to include meaningful participation by major developing countries or to achieve a reasonable balance of costs and benefits. At the follow-on treaty negotiations at The Hague in late 2000, the Clinton administration’s efforts to ensure full scope for international GHG emissions trading and the use of sinks to sequester carbon were blocked by the EU. Then, in early 2001, President Bush openly repudiated the Kyoto Protocol. Many observers expected the Kyoto process to fall apart. Yet in late 2001 at Bonn and Marrakech, the other countries of the world agreed on the details for implementing the regime, and the treaty will take effect if and when Russia ratifies it.

We propose a path that is independent of what happens with the Kyoto Protocol. The United States should engage China, India, and other major developing-country emitters, as well as Australia and any other industrialized countries that stay out of the Kyoto Protocol, in one or more new plurilateral arrangements. Such arrangements could cure the defects of Kyoto and allow participants to gain experience with alternative approaches, rather than being locked into a single monolithic regime that requires agreement among scores of nations before any changes can be made.

Our proposed regime would have the following elements. It would use market-based incentives in the form of an international cap-and-trade system for net GHG emissions that would include both developing and industrialized countries. Because GHGs mix globally, their effects on global climate are unrelated to where emission reductions occur. A cap would be placed on net emissions by all participating countries and their sources. Emissions trading would allow those countries and companies that face high emissions reduction costs to finance emissions reductions in other countries, especially developing countries that enjoy lower costs, benefiting all parties financially and environmentally. Further, it would enlist the resources and ingenuity of the private sector, giving companies powerful incentives to develop emissions-reducing technologies and ways of doing business. The new regime would also include a comprehensive regulatory system covering all GHGs, sinks (such as forests), and economic sectors rather than being limited to a subset of the problem, such as fossil fuel CO2 combustion. Finally, we would set sensible emissions limitations pathways that maximize net benefits to society.

Such a regime would be attractive to the United States, China, and even Europe. It also could yield an unexpected benefit: The Kyoto regime (or an EU-led successor) and the parallel regime could eventually merge, bringing the United States and major developing countries into a global climate policy that is superior to Kyoto. The United States and China will not join a serious climate regime without each other. Joint accession by the United States, China, and other developing countries would provide leverage to persuade Europe to fix the flaws in Kyoto as well as establishing greater price stability in the allowance trading market.

Kyoto’s flaws

First, although the Kyoto system did well to adopt a comprehensive approach and international emissions trading, it still restricts their use. For example, it puts numerical limits on the amount of sinks countries can claim to meet their targets, and it gives credit only to new forests while denying credit for conserving existing forests, which harbor greater biodiversity. And it limits emissions trading by excluding developing countries from the main trading system, relegating them to a second and more cumbersome system called the “clean development mechanism” (CDM). The CDM authorizes industrialized countries to earn emissions reduction credits for investments in projects to reduce net GHG emissions in developing countries. The CDM, which has been slow to be implemented, could bring some investment in lower-emitting technologies to developing countries. But the CDM requires no caps on national emissions, so each CDM project may just shift emissions from one place to another with little effect on overall emissions, and it could even increase them. And if CDM credits sell at a price near what developing countries could expect to earn from formal emissions trading, the CDM will hinder rather than foster developing-country accession to a full cap-and-trade regime.

Second, Bonn/Marrakech failed to solve another basic flaw in Kyoto: the omission of developing-country participation in emissions limitations. Developing-country participation in a global limitations effort is essential on both environmental and economic grounds. Developing countries will soon emit more GHGs than do industrialized countries. Moreover, restricting emissions only in some countries will induce emitting activities to shift to unrestricted countries, thereby accelerating the growth in the latter’s emissions and offsetting or even reversing the gains in restricted countries. Over time, such leakage would also make recipient countries even more reluctant to join the climate regime, as their economies become more dependent on emitting activities. In the near term, the mere fear of such leakage inhibits participation by countries contemplating emissions limitations. Omitting developing-country participation from international emissions trading also implies that global emissions abatement must be achieved at a much higher cost while denying developing countries a valuable flow of resources and technology that would help them grow more cleanly while overcoming poverty and disease.

Third, the Kyoto regime set arbitrary and costly emissions limitation targets and timetables not based on reasoned analysis. Kyoto did not address the level or rate of climate change that should be prevented, nor the appropriate limit on concentrations of GHGs in the atmosphere needed to avoid such climate change, nor the best way to limit net emissions to achieve such concentrations. Instead, it set targets as arbitrary percentage reductions below 1990 emissions. Further, Kyoto would require the United States to bear an enormous proportion of the global reductions from projected levels–about 50 to 80 percent of all industrialized country abatement.

We doubt that prompt ratification of Kyoto by the United States would lead to fundamental reform from within that would remedy these basic flaws. The United States would give up much leverage by joining without achieving reforms in exchange. A parallel joint regime with China and other countries and the demonstration that this regime works in practice would offer far greater leverage for persuading Europe to fix the flaws in Kyoto.

The case for action now

The extensive literature on climate science and policy shows that climate change is a serious risk that warrants sensible global regulatory action despite its many uncertainties. Indeed, some uncertainties, such as the risk of abrupt climate shifts, favor more, not less, action. But climate change calls for prudent preventive approaches, not costly crash measures.

Under the international law of treaty adoption by consent, a climate policy regime must yield net benefits not only to the world as a whole, but also to each country that participates. Because the damages from climate change and the costs and benefits of climate protection will vary significantly across countries, designing a regime to attract participation by all major emitters will be quite a feat.

Several studies suggest that the Kyoto Protocol, as currently structured, would probably yield expected benefits less than its expected costs, particularly for the United States. Perhaps for this reason, the Bush administration has rejected the protocol and proposed voluntary measures aimed at reducing U.S. GHG emissions intensity (emissions per unit of economic output), but not necessarily reducing total emissions.

Limiting the growth of GHG emissions can, however, be prudent insurance against the risks of climate change if appropriate regulatory policies are followed. A National Academy of Sciences report requested by the White House in 2001 concluded that rising GHG emissions from human activities are already causing Earth’s atmosphere to warm and that the rate and extent of warming will increase significantly during this century. Recent studies indicate that some initial warming and carbon fertilization may help agriculture in some areas, including Russia, China, and member countries of the Organization for Economic Cooperation and Development (OECD), but will likely have adverse effects in poorer areas. These studies also show that the impact of greater or more rapid warming will worsen worldwide over time, including losses of 1 to 2 percent of gross domestic product (GDP) in the United States and other OECD countries and 4 to 9 percent in Russia and most developing countries, except China, which is forecast to gain about 2 percent of GDP. These estimates do not include the possibility of abrupt changes in ocean currents or other earth systems.

But the Kyoto Protocol would reduce global emissions only enough to avoid a fraction of these future losses, perhaps 10 percent, amounting to a benefit of 0.1 to 0.2 percent of GDP in the United States and other industrialized countries and 0.4 to 0.9 percent of GDP elsewhere. Several economic models put the cost of meeting the Kyoto targets through wholly domestic measures to reduce CO2 emissions at 1 to 3 percent of GDP in the United States and other industrialized countries, clearly exceeding the benefits.

Smart regulatory design, however, can substantially reduce these costs. As detailed below, a comprehensive approach covering all GHGs and sinks and full international emissions trading could in concert reduce the costs by 90 percent or more, to 0.1 to 0.3 percent of GDP, about equal to the estimated benefits. Adding the risk of abrupt climate change and the ancillary benefits of reduction of other pollutants might make the benefits slightly greater than the costs for the United States and would make the benefits significantly greater than the costs globally.

Although staying out of Kyoto could give U.S. industry a competitive advantage over companies in other industrialized countries that are subject to Kyoto’s regulatory burdens, U.S. nonparticipation in any climate regime would also deprive U.S. businesses of valuable commercial opportunities and impose significant business risks. If the United States joined a well-designed climate regime, many U.S. companies could become allowance sellers by achieving low-cost GHG emission reductions and enhancing sinks, and many U.S. firms in the financial, consulting, accounting, legal, and insurance industries could help run emissions trading markets. These opportunities for U.S. business are likely to be foreclosed or sharply restricted if the United States remains on the sidelines. London, not New York, will become the center of global emissions trading; indeed, this is already starting to happen. Ironically, the United States has championed international emissions trading and the comprehensive approach for the past 12 years but is now standing aside while others move first. Britain, Denmark, and Norway are already launching their own domestic CO2 emissions trading systems, and the EU is creating a Europe-wide trading system, also limited to CO2 emissions. These European CO2 emissions trading systems may become the models for the global trading system, restricting the coverage of other gases and sinks and leaving the United States at a disadvantage if it decides to join later.

If the United States stays out of international climate policy, U.S. businesses subject to eventual U.S. domestic emissions limitations, as well as those with operations abroad in industrialized countries that ratify Kyoto, will be unable to enjoy the compliance cost savings provided by international emissions trading. Worse, parties to the Kyoto Protocol, particularly in Europe, may attempt to impose countervailing duties against imports of U.S. goods to compensate for the lower cost of embedded GHG emissions in U.S. production; such “carbon trade wars” could seriously damage global prosperity.

Uncertainty in GHG regulatory policies may also inhibit investment by utilities and other businesses that are already subject to U.S. environmental regulations aimed at air pollutants from sources that also generate GHGs. Capital investments needed to comply with these regulations may be rendered obsolete by the subsequent adoption of GHG regulatory controls that will require additional and different investments to limit CO2 or other GHG emissions from the same facilities. Unless GHGs are added to the regulatory mix sooner rather than later, these businesses will face a period of substantial uncertainty and regulatory risk. Many in industry might even prefer a single integrated regime covering many pollutants to a sequence of separate, fragmented, and potentially inconsistent partial regulations, especially if a comprehensive statute provided for interpollutant emissions trading. At the same time, the absence of credit for early abatement efforts may mean that U.S. firms are holding back on investments, including investments they would have made irrespective of climate policy but are deferring in order to be able to claim GHG emission reduction credits when those credits become available. This drag on investment may be adversely affecting U.S. economic growth.

In addition to the environmental benefits of forestalling climate change and the commercial benefits of participating in and helping to shape an international GHG emissions trading system, the United States would reap strategic benefits on other issues from its willingness to engage in multilateral climate policy. Especially after the September 11, 2001, terrorist attacks, the United States needs the cooperation of other countries to help achieve national security and global economic and political stability. Many other countries on whose cooperation the United States depends are also deeply concerned about climate change; they will bridle at U.S. indifference or intransigence regarding climate issues. On the other hand, if the United States successfully engages China and other major developing countries and helps secure their participation in international GHG regulation, the United States will gain leadership on an important global issue and help strengthen multilateralism in ways that would provide benefits in other areas of international policy. Indeed, along with issues such as trade and terrorism, climate change may be a premier platform for a new U.S.-China strategic partnership. And the United States may benefit from a negotiating strategy that links sensible U.S. cooperation on climate policy to the cooperation of others on issues of greater interest to the United States. If, on the other hand, the United States remains the largest emitter and sits out a climate treaty, severe weather events around the world, such as last summer’s heat wave in Europe, may be blamed, accurately or not, on the United States, and may become a new flashpoint for anti-Americanism.

Economic incentives

Today the global atmosphere is being treated as an open-access resource and as a result is being overused in a classic “tragedy of the commons.” Those who generate GHGs bear only a fraction of the climate risks they generate but would bear the full costs of their own abatement efforts. Accordingly, the atmosphere is overexploited and climate risks are greater than they would be if the full social costs of emissions were appropriately reflected in the decisions of those using the atmosphere for GHG disposal. This is a classic market failure, an “externality” that ordinary market operations and voluntary behavior will not correct. Rules backed by law are needed in order to prevent wasteful overuse of atmospheric resources.

Developing-country participation in a global effort to limit emissions is essential on both environmental and economic grounds.

Such rules could take the form of centralized commands prescribing conduct and technologies, government subsidies for emissions reductions, emissions taxes, or quantity caps on emissions, with or without tradable emissions allowances. The cap-and-trade approach amounts to parceling out limited property rights to use the atmosphere. It underlies the highly successful U.S. program for reducing acid rain through sulfur dioxide allowance trading, as well as the U.S. programs for phasing lead out of gasoline and phasing out chlorofluorocarbons, and systems worldwide for protecting fisheries. The Bush administration and others should see the use of tradable property rights not as an intrusion on economic growth or sovereignty, but as akin to the familiar parceling out of property rights in land, oil, and other resources that allowed prosperity and stability to flourish in the United States. Thirty years of experience with environmental regulation makes clear that, if properly designed and implemented, incentive-based systems are generally far more efficient and effective than command-and-control regulation.

A cap-and-trade system allows flexibility to undertake more of the reductions at those sources that can reduce emissions at lower costs and sell unneeded emissions allowances to sources with higher costs. The U.S. sulfur dioxide emissions trading program has reduced emissions even more than expected and at roughly half the cost of the prior regulatory approach. Because of wide variations in abatement opportunities and costs across countries, numerous analyses find that international emissions trading involving all major emitters, including China, would reduce costs by about 75 percent as compared to wholly domestic CO2 emissions limitations. Moreover, under a trading system, emitting sources bear a cost for each unit of emissions they generate. They thus have strong economic incentives to develop and adopt emissions-reducing technological innovations.

Many economists nonetheless favor emissions taxes over tradable permits for abating GHGs. A key reason is that meeting the quantity cap in a trading system could result in unexpectedly high costs (if the costs of abatement turn out to be greater than anticipated), whereas taxes would limit costs but could let emissions rise. But at the international level, taxes confront two serious problems. First, under the consent rule for the adoption of treaties, they fail to engage developing countries. China and India will not agree to impose taxes on themselves without compensation, but the compensation would likely vitiate the tax. Second, it would be easy for countries to render taxes ineffective by playing behind-the-scenes fiscal cushioning games in which they quietly change other domestic taxes and subsidies to offset the GHG tax. Because trading imposes a quantity cap on emissions (with compensation through extra headroom allowances), it can avoid both of these problems. Further, proposals to curb the costs of a cap-and-trade regime with a “safety valve” (a trigger price at which governments would sell extra allowances) need to be studied in a world of multiple governments who may compete to sell at lower trigger prices, thus letting emissions rise.

Regulation of emissions through a cap-and-trade system is, to be sure, only one tool among several in a sound climate policy. At least four complementary strategies are also warranted. First, public and private investment in low-GHG technology research needs to be accelerated. Second, government should identify and correct policies that blunt existing economic incentives to conserve energy and economize on net GHG emissions, such as perverse government subsidies that encourage fossil fuel use and forest clearing. Third, information-based strategies, including public reporting by firms of their net GHG emissions, may also be useful as a means of generating incentives, especially in the early years before a full-fledged regulatory system is implemented. Fourth, governments should invest in helping people adapt to a changing climate. This assistance is especially needed for poorer regions of the world, which lack affordable insurance or access to adaptive technologies.

Such measures, however, will not be enough to reduce emissions on the scale required to moderate the climate risks resulting from the common-pool character of the atmosphere. Reductions in emissions intensity due to general technological advances do not necessarily reduce total emissions. For example, the U.S. Energy Information Administration reports that although U.S. energy intensity declined at 2.3 percent per year from 1970 to 1986, and at 1.5 percent per year from 1987 to 2000, total CO2 emissions in the United States have nonetheless increased more than 30 percent because of the increase in overall economic activity. Although new low-GHG energy technologies may be developed, they will not be adopted unless both producers and consumers have an incentive to demand them. Regulatory incentives can stimulate such demand.
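The intensity point is worth spelling out with an illustrative calculation using assumed round numbers (ours, not the Energy Information Administration's): if economic output grows 3 percent a year while emissions intensity falls 1.5 percent a year, total emissions still rise about 1.5 percent a year, which compounds to roughly 35 percent growth over 20 years (1.015 raised to the 20th power is about 1.35). Falling intensity slows emissions growth; it does not by itself reverse it.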

A comprehensive approach

On both environmental and economic grounds, it is imperative that a comprehensive approach that includes all gases, sources, and sinks be adopted in strategies to limit GHG emissions. The comprehensive approach has been advocated by the United States since 1989 and is largely embodied in both the Framework Convention and the Kyoto Protocol. Under the comprehensive approach, emissions limitation targets are defined in terms of a common unit of measurement that includes all the GHGs. An index is used to weight the GHGs’ relative contribution to climate change. A nation or source may choose the mix of limitations of different GHG emissions or sink enhancements that it prefers in order to achieve its net GHG emissions limitation target.

The comprehensive approach is environmentally superior to a CO2-only approach because it would prevent shifts in emissions from regulated sectors and gases (such as CO2 from burning coal) to unregulated ones (such as CH4 from leaky natural gas systems). Because CH4 and other GHGs are significantly more potent warming agents than CO2, even small shifts could exacerbate climate change. The new European CO2 trading directive may invite just this tradeoff. The comprehensive approach would also yield valuable side benefits in reduction of other pollutants, carbon fertilization of plant growth (an attribute that the non-CO2 gases lack), and biodiversity conservation from the incentive to enhance sinks such as forests.

The comprehensive approach is economically superior because it allows flexibility for each country to choose its most cost-effective means to achieve its index-weighted GHG emissions target. Studies at the Massachusetts Institute of Technology, the World Bank, and the Department of Energy have shown that embracing all GHGs would reduce costs by about 60 to 75 percent (or even more if sinks are counted) as compared to regulating fossil fuel CO2 alone. Assuming 60 percent savings from the comprehensive approach and the 75 percent savings from international emissions trading noted above, the combined cost savings could be 90 percent as compared to a CO2-only policy with national caps and no trading.
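The arithmetic behind the combined figure is straightforward if one assumes, as a rough approximation, that the two sources of savings apply independently to whatever costs remain:

1 − (1 − 0.60) × (1 − 0.75) = 1 − 0.40 × 0.25 = 0.90

That is, the comprehensive approach leaves 40 percent of the cost of a CO2-only policy, international trading cuts that remainder by three-quarters, and only about 10 percent of the original cost is left, for combined savings of roughly 90 percent.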

Criticisms of the comprehensive approach as too complex and difficult to implement are misplaced. Ignoring non-CO2 gases does not make them go away. The environmental and economic gains of the comprehensive approach dwarf any administrative costs it may entail. Simplified default rules can be adopted to deal with cross-gas comparison indices and the uncertainties presented in measuring GHGs such as agricultural CH4 and CO2 sinks, and these rules can provide incentives to improve monitoring and measurement techniques over time.

Setting climate targets

With the improvements we have suggested so far, the Kyoto targets might achieve a close balance between benefits and costs. Our design for climate policy would go a step further, adjusting regulatory targets to maximize net benefits, both to the world and to the United States. We would set emissions limitation targets and pathways based on a benefit-cost calculus. Such a calculus takes into account the facts that the effects of climate change are a function of the total atmospheric concentration of GHGs, which changes quite slowly in response to changes in emissions; that the more serious effects of climate change would occur in the future but can be ameliorated by earlier emissions reductions; and that the costs of reducing emissions can be expected to decline with technological advances and the turnover of the capital stock. In contrast to both the arbitrary Kyoto emissions targets and a least-cost path to stabilize GHG concentrations at an arbitrary level, our approach would set targets that minimize the sum of climate damages and abatement costs over time.

Regulation of emissions through a cap-and-trade system is one tool among several in a sound climate policy.

We believe that maximizing net benefits, including interim benefits, is conceptually preferable to the Kyoto approach, which sets arbitrary emissions targets based on an arbitrary base year, and to a least-cost path to stabilize at an arbitrarily chosen concentration level. With expert advice, countries could negotiate a schedule of future aggregate global emissions targets (with target periods set for, say, every 10 years over the next three or four decades), based on the principle of maximizing net benefits to society. This would require judgments by accountable political officials about the expected benefits and costs of abatement, informed by expert analysis of benefits, costs, uncertainties, and sensitivity analysis. Recent studies indicate that such an approach would be less stringent than the Kyoto targets but more stringent than the least-cost stabilization path.

Setting such pathways will necessarily involve substantial social and political elements. Nonetheless, the judgments involved should be disciplined and made more transparent by the societal benefit-cost framework for decision and the facts and analysis generated in implementing it. And the pathways would be revised periodically in light of experience and new information. The pathways could be used to set targets for successive periods of, say, 10 years. The targets would then be divided into national allowance allocations for each period. Such allowances could then be subassigned by national governments to private firms and traded internationally. This approach would be similar in concept, though not stringency, to the schedule (with allowance trading and banking) successfully employed in the United States to phase out leaded gasoline in the 1980s.

The path forward

The Bush administration was right to question the flaws in Kyoto, but its critics are also right in saying that climate change needs an international approach. We propose a new, third option: The United States, China, and other major developing countries, as well as Australia and other like-minded industrialized countries (perhaps including Russia if it declines to ratify Kyoto) should create a parallel climate treaty that makes full use of the comprehensive approach and emissions trading, includes developing country participation, and sets sensible targets. This approach would be a useful source of experimentation and learning from alternatives to the Kyoto system or an EU-led successor regime. It would be much easier to develop such a regime than to renegotiate the entire Kyoto accord among all countries. And eventually it could merge with the Kyoto or successor regime.

Without the United States and China, the Kyoto regime will amount to little. Without Russia, it will amount to even less. Yet the United States will not act without China for fear of leakage and competitiveness losses and because without China the costs of abatement in the United States will seem too high. And China will not act without the United States both because of perceived unfairness if the United States does not adopt limitations and because China’s major incentive to join a climate regime will be to sell allowances to U.S. companies. Joint action would satisfy both. And if China joins, other major developing-country emitters are likely to follow.

Ideally, the United States and China would reach a high-level accord that brings this system into operation. Another possible approach is for domestic GHG abatement legislation, such as the McCain-Lieberman GHG cap-and-trade bill recently debated in the Senate, or a multipollutant bill that includes GHGs along with sulfur, nitrogen, and mercury, to add provisions making their targets contingent on Chinese participation in a new joint regime and authorizing U.S. companies to meet their targets with allowances purchased from China and other participating countries.

Engaging developing countries such as China and India would reduce global emissions more effectively and would also cut costs by reducing leakage and involving developing countries in emissions trading. But the developing countries have strong economic and equity reasons not to agree to emissions cuts that they fear could compromise their economic development for the sake of an environmental problem they believe was created primarily by and is of primary concern to wealthier countries. Moreover, China and Russia may see some global warming as benign. To attract their participation, the industrialized countries must help finance emissions limitations.

We argue that the most promising method is an international emissions trading system that assigns major developing countries allowances above their existing emissions. That would provide headroom–not hot air–for future growth and profitable allowance sales that attract investment while also reducing costs to industrialized countries. (This was precisely the approach used to attract Russia to participate in Kyoto; but when the major allowance buyer–the United States–withdrew, the value of Russia’s surplus allowances declined sharply, and Russia now sees too little gain from participating, hence its own apparent withdrawal.) Other methods of side payment, such as direct government financial aid, would be less cost-effective and more politically unpopular than creating a new market in competitive business deals. Trade, not aid, will be the best way to engage developing countries in global climate cooperation.

We submit that Europe, Japan, and Russia would prefer our proposal in which the United States and China join simultaneously rather than individually. Allowance prices will rise sharply if the United States (a large net demander) joins alone, hurting Europe and Japan. And allowance prices will fall sharply if China (a large net supplier) joins alone, hurting Russia. In each case, the party likely to be hurt could seek to block accession. But simultaneous accession by the United States, China, and other countries–or the merger of our proposed regime with the Kyoto regime or a successor–would help maintain price stability, enhancing the chances for agreement.

Pragmatism involves attention to benefits, costs, and incentives. It also emphasizes empiricism: learning from experience and revising policies in the light of new information. A pragmatic approach to climate policy would get us out of the straitjacket of the Kyoto monolith and test alternative approaches. Our proposal offers a serious opportunity to compare regimes and improve them over time. Together, the United States and China, with other willing partners, could test a superior alternative design for climate protection while forging a new partnership for global leadership.

Youth, Pornography, and the Internet

The Internet is both a source of promise for our children and a source of concern. The promise is that the Internet offers an enormous range of positive and educational experiences and materials for our children. Yet children online may be vulnerable to harm through exposure to sexually explicit materials, adult predators, and peddlers of hate. If the full educational potential of the Internet is to be realized for children, these concerns must be addressed.

Although only a small fraction of material on the Internet could reasonably be classified as inappropriate for children, that small fraction is highly visible and controversial. People have strong and passionate views on the subject, and these views are often mutually incompatible. Different societal institutions see the issue in very different ways and have different and conflicting priorities about the values to be preserved. Different communities—at the local, state, national, and international levels—have different perspectives. Furthermore, the technical nature of the Internet has not evolved in a way that makes control over content easy to achieve.

On June 23, 2003, the U.S. Supreme Court upheld the constitutionality of the Children’s Internet Protection Act (CIPA). Enacted in December 2000, CIPA requires schools and libraries that receive federal funds for Internet access to block or filter access to visual depictions that are obscene, child pornography, or material “harmful to minors.” The term “harmful to minors” is taken to mean material that if “taken as a whole and with respect to minors, appeals to a prurient interest in nudity, sex, or excretion; depicts, describes, or represents, in a patently offensive way with respect to what is suitable for minors, an actual or simulated normal or perverted sexual act, or a lewd exhibition of the genitals, and taken as a whole, lacks serious literary, artistic, political, or scientific value to minors.”

CIPA also allows, but does not require, giving an authorized person the ability to disable the technology protection measure during any use by an adult to enable access for bona fide research or other lawful purpose.

The Supreme Court decision on CIPA is unlikely to settle the public debate on how best to protect children on the Internet from inappropriate materials and experiences such as pornography and sexual predators. Indeed, nothing in the Court’s decision changes the basic conclusion of the 2002 National Research Council (NRC) committee report Youth, Pornography, and the Internet that social and educational strategies to teach children to use the Internet responsibly must be an essential component of any approach to protection, a component that has been largely ignored in the public debate.

Although technology and public policy have helpful roles to play, an effective framework for protecting children from inappropriate sexually explicit materials and experiences on the Internet will require a balanced mix of educational, technical, legal, and economic elements that are adapted appropriately to the many circumstances that exist in different communities. An apt, if imperfect, analogy is the relationship between children and swimming pools. Swimming pools can be dangerous for children. To protect them, one can install locks, put up fences, and deploy pool alarms. All of these measures are helpful, but by far the most important pool protection measure for children is to teach them to swim.

Approaches to protection

There are three elements to a balanced framework for protecting children online: public policy and law enforcement, technology, and education.

Public policy and law enforcement. Effective and vigorous law enforcement can help deter Internet pornography and diminish the supply of inappropriate sexually explicit material available to children. For practical and technical reasons, it is most feasible to seek regulation of commercial sources of such material. The pornography industry seeks to draw attention to its products, whereas noncommercial sources of sexually explicit materials generally operate through private channels. In fact, however, there has been a virtual hiatus in federal obscenity prosecutions during the very time when Internet usage has been exploding. Vigorous efforts against operators of commercial Web sites that carry sexually explicit material that is clearly obscene under any definition would help to clarify existing law so as to make it a useful tool in reducing the supply of such material.

On the other hand, for a few hundred dollars anyone can buy a digital camera and a Web site and produce sexually explicit content, publishing it on the Web for all to see. And because the Internet is global, law enforcement and regulatory efforts in the United States aimed at limiting the production and distribution of such material are difficult to apply to foreign Web site operators, of which there are many. Without a strong international consensus on appropriate measures, it is hard to imagine what could be done to persuade foreign sources to behave in a similar manner or to deny irresponsible foreign sources access to U.S. Internet users.

Other aspects of public policy can also help to shape the Internet environment in many ways. For example, we can seek to promote media literacy and Internet safety education, which could include the development of model curricula; support for professional development materials for teachers on Internet safety and media literacy; and outreach to educate parents, teachers, librarians, and other adults about Internet safety education issues. In addition, public policy can support the development of and access to high-quality Internet material that is educational, age-appropriate, and attractive to children and also encourage self-regulation by private parties.

Technology-based tools. Technology-based tools, such as filters, provide parents and other responsible adults with additional choices about how best to fulfill their responsibilities. A great many technology-based tools are available for dealing with inappropriate Internet material and experiences. Filters (systems or services that limit in some way the content that users can gain access to) are the most-used technology-based tool. Filters can be highly effective in reducing children’s exposure to inappropriate sexually explicit material, but there is a tradeoff: they also reduce access to large amounts of appropriate material. For many, that is an acceptable cost. On NRC committee site visits, teachers and librarians commonly reported that filters served primarily to relieve political pressure on them and to insulate them from liability. Because the political and legal consequences of letting inappropriate material through are greater than those of blocking legitimate material, filter vendors are likely to err on the side of overblocking. In addition, filters reduce nonproductive demands on teachers and librarians, who would otherwise have to spend time watching what students and library patrons were doing. Note, however, that filters can be circumvented; the easiest way to do so is to obtain unfiltered Internet access in another venue, such as at home.

Monitoring a child’s Internet use is another technology-based option. Many monitoring methods are available, among them remote viewing of what is on a child’s screen, logging of keystrokes, and recording of Web pages visited. Each of these options can be used surreptitiously or openly. Surreptitious monitoring cannot deter deliberate access to inappropriate material or experiences. It also raises many concerns about privacy that are similar to other family privacy concerns, such as whether parents should read children’s diaries or search their rooms covertly. Furthermore, although monitoring may reveal what a child is doing online, it presents a dilemma for parents because taking action against inappropriate behavior may also reveal the monitoring.

The major advantage of monitoring over filtering is that it leaves children in control of their Internet experiences and thus provides opportunities for them to learn how to make good decisions about Internet use. However, this outcome is likely only if the child is subsequently educated to understand the nature of the inappropriate use and the desirability of appropriate use. If instead the result is simply punishment, then whenever monitoring is absent, inappropriate use may well resume. Clandestine monitoring may also have an impact on the basic trust that is the foundation of a healthy parent-child relationship.

Online age-verification technologies seek to differentiate between adults and children. One way of doing that is to request a valid credit card number. Credit cards can be effective in separating children from adults, but their effectiveness will decline as credit card-like payment mechanisms for children become more popular. Other ways of verifying age can provide greater assurance that the user is an adult, but almost always at the cost of inconvenience to legitimate users.

There has been a virtual hiatus in federal obscenity prosecutions during the very time when Internet usage has been exploding.

Much more research on these technologies is clearly justified. The computer industry has produced extraordinary business success and some of the largest personal fortunes in U.S. history. Yet it has not committed a significant amount of its resources to leading-edge R&D for the protection of children on the Internet.

Social and educational strategies. Social and educational strategies are intended to teach children how to make wise choices about how they behave on the Internet and how to take control of their online experiences: where they go, what they see, what they do, to whom they talk. Such strategies must be age-appropriate if they are to be effective. Furthermore, such an approach entails teaching children to be critical and skeptical about the material they are seeing.

Perhaps the most important social and educational strategy is responsible adult involvement and supervision. Peer assistance can be helpful as well, since many young people learn as much in certain areas from their friends or siblings as they do from parents, teachers, and other adult figures. Acceptable-use policies in families, schools, libraries, and other organizations provide guidelines and expectations about how people will conduct themselves online, thus providing a framework in which children can become more responsible for making good choices about the paths they choose in cyberspace, a skill helpful for any use of the Internet.

Internet safety education is analogous to safety education in the physical world. It may include teaching children how sexual predators and hate-group recruiters typically approach young people and how to recognize impending access to inappropriate sexually explicit material. Information and media literacy can help children recognize when they need information and how to locate, evaluate, and use it effectively, irrespective of the media in which it appears. They can also learn how to evaluate the content in media messages. Children with these skills are less likely to stumble across inappropriate material and more likely to be able to put it into context if they do. More compelling, safe, and educational Internet content that is developmentally appropriate, and enjoyable material on a broad range of appealing or useful topics, may help make some children less inclined to spend their time searching for inappropriate material or engaging in unsafe activities.

Social and educational strategies focus on nurturing personal character, encouraging responsible choices, and strengthening coping skills. Because these strategies place control in the hands of the children, the children have opportunities to exercise some measure of choice. As a result, some are likely to make mistakes as they learn these lessons.

These strategies are not inexpensive, and they require sustained attention and effort to implement. Adults must learn to teach children how to make good choices on the Internet. They must be willing to engage in sometimes difficult conversations. They must face the tradeoffs that are inevitable with demanding work and family schedules. But in addition to teaching responsible behavior and coping skills for encounters with inappropriate material and experiences on the Internet, this instruction will help children think critically about all kinds of media messages, including those associated with hate, racism, and violence. It will also help them conduct effective Internet searches for information and make ethical and responsible choices about Internet behavior—and about non-Internet behavior as well.

Understanding complexities

Despite heated public rhetoric that often reduces debate to slogans, the problem of protecting children on the Internet is genuinely complex. Some of the most important complexities include:

The term “pornography” lacks a well-defined meaning. There may be broad agreement that some materials are or are not pornographic, but for other materials, individual judgments about what is pornography will vary. In recognition of this essential point, the term “inappropriate sexually explicit material” was used in our report to underscore the subjective nature of the term. “Protection” is also an ambiguous term. For example, does protection include preventing a child from obtaining inappropriate material even when he or she is deliberately seeking it? Or does it mean shielding a child from inadvertent exposure? Or does it entail giving children tools to cope effectively if they should come across it?

Supreme Court precedent supports the constitutionality of differing standards for adults and children regarding material to which they may be allowed access; this is the basis of differing standards for material that is “obscene” and “obscene for minors.” However, its ruling on CIPA notwithstanding, the Supreme Court has also held that measures taken to protect children from material that is “obscene for minors” must not unduly infringe on the rights of adults to have access to this material.

There is no clear scientific consensus regarding the impact of children’s exposure to sexually explicit material. Nonetheless, people have very strong beliefs on the topic. Some people believe that certain sexually explicit material is so dangerous to children that even one exposure to it will have lasting harmful effects. Others believe that there is no evidence to support such a claim and that the impact of exposure to such material must be viewed in the context of a highly sexualized media environment.

Although it is likely that there are some depictions of sexual behavior whose viewing by children would violate and offend the collective moral and ethical sensibilities of most people, protagonists in the debate would probably part company on whether other kinds of sexual material are inappropriate or harmful. Such material might include graphic information on using a condom or descriptions of what it means to be lesbian or homosexual. As a general rule, this information does enjoy First Amendment protection, at least for adults and often for children.

Perhaps the most important social and educational strategy is responsible adult involvement and supervision.

Children may well be more sophisticated than adults with respect to technology. These “digital children” have never known a world without personal computers, and many have been exposed to the Internet for much of their lives. They also have the time and the inclination to explore the technology. The result is that, compared to their parents, they are more knowledgeable about what can happen on the Internet. Adults cannot assume that their children’s online experiences are anything like their own experiences in workplace cyberspace. A teenager testified to the NRC committee that, knowing her mother would “freak out” at the online solicitations and invitations to view commercial sexually explicit material she was receiving, she simply set up an AOL account for her mother with parental controls set to “young teen,” thereby blocking her mother’s receipt of such material. Her mother, not knowing what was being blocked, expressed surprise that her own online experience was much less intrusive than she had been led to believe would be the case. This testimony is consistent with a study undertaken by the Girl Scout Research Institute, which reports that “30 percent of girls [responding to the study] had been sexually harassed in a chat room, but only 7 percent told their mothers or fathers about the harassment, most fearing their parents would overreact and ban computer usage altogether.”

All mechanisms for determining whether material is appropriate or inappropriate will make erroneous classifications from time to time. But misclassifications are fundamentally different from disagreements over what is inappropriate. Misclassifications are mistakes due to factors such as human inattention or poorly specified rules for automated classification. They are inevitable even when there is no disagreement over classification criteria. In contrast, disagreements over what is appropriate result from differences in judgment about what material is suitable for children.

Deliberate viewing of sexually explicit material on the Internet is very different from inadvertent viewing. Technologically sophisticated teenagers determined to obtain such material will invariably find a way to do so. They will circumvent school-based filters by using home computers and circumvent home-based filters by using a friend’s computer. So the real challenge is to reduce the number of children who desire to look at inappropriate content. This, of course, is the role of social and educational strategies that build character, that teach appropriate Internet use and respect for other people. By contrast, inadvertent viewing (resulting, for example, from mistyped Web addresses or the ambiguities of a language where the word “beaver” has both sexual and nonsexual connotations) may be addressed more effectively by technology and education that reduce the number of such accidents and teach children how to deal with them when they do happen.

Beyond CIPA

The CIPA decision affects only schools and libraries that use federal funds to provide Internet access. Given the ubiquity of Internet access for young people and their sophistication about technology, parents, teachers, librarians, and the technology community, among others, have many opportunities for action that will help to protect children. Three of the most important are:

Educating young people to conduct themselves safely and appropriately on the Internet. This continues to be basic to their online protection. Therefore, parents must learn about the Internet from their children’s perspective and find the time in their busy days to talk with their children about safe and appropriate behavior. A National Academies website (www.netsafekids.org) has useful information for parents. Teachers and librarians have opportunities to find or develop good educational materials for Internet safety and appropriate behavior, and to use these materials when they interact with children (or their parents).

Ensuring that libraries providing mandatory filtered Internet access for patrons institute smooth procedures for requesting filter removal that are not burdensome for the user. Indeed, the majority justices in the CIPA decision noted that although the statute was constitutional on its face, it could still be unconstitutional if it were implemented in ways that unduly infringed on the ability of adults to remove filtering from their own use.

Filtering improvements that would help reduce the concerns about inappropriate blocking. Technology vendors could develop more useful filters that would be better able to tell the difference between restricted and unrestricted material; have default settings configured to be minimally restrictive, blocking only types of material that are obscene in the legal sense; indicate why blocked sites were being blocked; and provide ways of overriding blocks that are secure and usable with minimal hassle and delay.

Contrary to statements often made in the political debate, there is no single or simple answer to controlling the access of minors to this sort of material on the Web. To date, efforts to protect children have focused mostly on technology-based tools such as filters. But technology, especially today’s technology, cannot provide a complete or even a nearly complete solution. Nor can more effective law enforcement, on its own, dry up the supply of offensive material. Although technology and law enforcement have important roles to play, social and educational strategies to develop in minors an ethic of responsible choice and the skills to implement these choices and cope with exposure are central to protecting them from the negative effects of exposure to inappropriate material or experiences on the Internet.

In concert with social and educational strategies, both technology and public policy can contribute to a solution if they are adapted to the many circumstances that exist in different communities. In the end, however, values are closely tied to the definitions of responsible choice that parents or other conscientious adults impart to children, and to judgments about the proper mix of education, technology, and policy to adopt. Though some might wish otherwise, no single approach—technical, legal, economic, or educational—will suffice. Protecting our children on the Internet will require moving forward on all these fronts.

On Sexual Predation

The Internet enables strangers to establish contact with children. Although many interactions between children and strangers can be benign or even beneficial, as in the case of a student corresponding with a university scientist, strangers can also be predators and sexual molesters. So Internet-based interaction via chat rooms, instant messages, and e-mail dialogs can lead to face-to-face contact with such individuals that may be traumatic and even life-threatening for a child. This possibility thus poses far greater danger than does the mere passive receipt of even highly inappropriate sexual material. The anonymity and interaction at a distance of the Internet prevent a child from using cues such as gestures, tone of voice, and age that are present during face-to-face interaction and help one person judge another’s intent.

Although only a small percentage of youth Internet users in a 2001 survey had received an aggressive sexual solicitation, a particular pattern of behavior often characterizes a child’s online conversation with a potential predator in a chat room or via instant messaging. In a typical interaction, the predator begins with dialogue that is entirely innocuous. Over time (perhaps weeks or even months), the predator grooms his target, seeking to build rapport and trust. No single piece of dialogue is necessarily sexual or even suggestive, but as the victim begins to trust the anonymous predator, conversations become increasingly personal. Young adolescents are strongly motivated by the need to separate from parental authority and to gain acceptance for their growing adulthood. Furthermore, they are usually inexperienced in dialogue with adults (especially adults who employ cunning and guile) and are likely to be relatively honest in sharing their emotional states and feelings. Predators play on this naivete and need for acceptance. Sexually explicit comments and/or material are unlikely to be part of this dialogue in its early stages and may never emerge online; when they do appear, such comments are introduced slowly in order to reduce the inhibitions of the victims and make them more willing to meet with those who desire to prey on them.

For parents, the fear that their children will be physically harmed by predators they have met through the Internet far outweighs the fear of exposure to pornography. These fears are perhaps accentuated by the fact that for some young people, face-to-face meetings with someone initially encountered online are an accepted part of life, and are more common among girls than boys. These face-to-face encounters occur at parties, at malls, and in the home. Many young people know that they should not give out personal information, such as their real names and addresses, but they are often overconfident about their ability to make sensible judgments in potentially dangerous situations. The NRC committee spoke to one teenage girl who appeared to understand that people often lie online, that meeting Internet acquaintances face-to-face can be dangerous, and that people are not always what they seem. However, when asked if she would meet someone from the Internet, she said, “Sure—if I had any doubts about him, I would never do so—and I only do it with people I know are OK.”

New Economy Lite

Roger Alcaly’s title overpromises. Despite this, the book contains a useful introduction for a lay reader to a set of topics that academic economists interested in the interaction between high technology and U.S. economic growth have been struggling with over the past decade.

Alcaly, a partner at an investment management firm, argues that the U.S. economy underwent a structural change in the mid-1990s. Linked to the increasing use of information and communication technologies in business, this change pushed the United States permanently onto a faster track of output and productivity growth. The recent travails of the U.S. economy, in his view, although a necessary antidote to the excesses of the late 1990s, are destined to recede as the country resumes its new and faster economic pace.

This thesis basically occupies two of the six substantive chapters of the book. In one of these two chapters, he argues that increases in U.S. productivity growth rates in the latter part of the 1990s—linked to increasing use of computers and communications technology in the economy—are what define the “new economy.” In the second of these chapters, he signs on to the argument that the reason it took almost half a century for the development of modern computer technology to result in a spike in productivity growth is that a certain “critical mass” of technology use is needed in order for a technological revolution to have a major impact, as users gradually learn how to use the new technology.

In the first of these two chapters, Alcaly does a decent, if incomplete, job of surveying two rapidly growing and somewhat disorderly academic economics literatures that are interrelated. One of those literatures concerns the measurement of prices for high-tech goods and services, whose quality has improved dramatically over time, and where the conventional price indexes constructed by government statistical agencies have lagged badly in tracking these changes. (I should disclose that I am most familiar with and have contributed to this literature.)

The other concerns the measurement of productivity growth rates. So-called “total factor productivity” is calculated by deducting the imputed contributions of inputs to production (such as labor; various kinds of conventional capital goods; high-tech capital goods such as computers, communications, and software; etc.) from the growth rate of output, and the “residual”—the difference between what happened and what you can impute to the impact of growth in the inputs you have measured—is then interpreted as the effect of things you don’t understand well, most notably technology. In recent years, economists (including Dale Jorgenson and Kevin Stiroh in the pages of this publication) have charted a marked increase in total factor productivity growth in the latter part of the 1990s.
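In rough notation (a standard growth-accounting sketch, not a formula from the book under review), the residual is what is left of output growth after the share-weighted growth of the measured inputs is subtracted out:

TFP growth = ΔY/Y − Σ s_i (ΔX_i/X_i)

where Y is output, the X_i are inputs such as labor and the various kinds of capital, and the s_i are each input’s share of total cost. Anything that raises output faster than the measured inputs grow shows up in this residual, which is why it is commonly read as the footprint of technology.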

The two literatures are related in that recent studies that find an uptick in productivity growth since 1995 make use of new and improved price indexes for high-tech goods such as semiconductors, computers, communications equipment, and software. These new indexes have dramatically changed the historical picture of the U.S. economy painted by government statistics. (Interestingly, Alcaly’s own tables show many periods of productivity growth in the “old economy” roughly equal to or even exceeding the post-1995 surge he identifies with the coming of the new economy.)

A question then arises: If it is computers that are causing the uptick, why did it take so long? Computers, after all, were invented in the 1940s, computer sales have been growing rapidly since the 1950s, and academic studies seem to show computer prices falling at annual rates of roughly 20 percent since the early 1950s. There is evidence that those rates of price decline increased with the PC boom of the 1980s and increased further in the 1990s, but the basic pattern is long-standing: computer prices have been dropping like a rock since at least the 1950s, while computer sales have been growing rapidly, year after year after year. Why the long delay in increasing productivity growth rates?

In the second of these two chapters, Alcaly seizes on the argument advanced by some economists (notably Paul David, who takes as an analogy the diffusion of electric power in the U.S. economy) that it simply takes a long time for businesses and consumers to learn how to properly exploit new technology. Although this argument is largely an exercise in telling a convincing story consistent with known facts, there is also some empirical support, notably statistical studies showing lags of roughly a decade between computer investments and a measurable increase in productivity growth in particular industries. But the decade of lag suggested by statistical studies is very different from a lag of almost 50 years. The discontinuity in productivity growth in the face of continuously increasing computer investments begs for further explanation. To suggest, as Alcaly (and others) do, that “critical mass” is important simply moves our ignorance to another plane.

Dale Jorgenson has speculated in this publication that the root cause of the productivity resurgence of the 1990s was a sharp acceleration in the rate at which semiconductor prices were declining and the impact of significantly cheaper chips on the chip-consuming computer and communications sectors. This is an attractive explanation, since other studies (my own, for example, with Ana Aizcorbe of the Federal Reserve) suggest that semiconductor price drops accounted for from 40 to 60 percent of price declines in computers in the late 1990s. This acceleration in technological innovation in semiconductors therefore produced an acceleration in computer price declines and, if productivity gains in user industries are associated with rapid technological change, may be an important factor in our recent productivity bonanza.

Alcaly notes the faster rate of decline in semiconductor and computer prices after 1995 and is quick to credit part of the surge in computer investment—and productivity growth—to this source. Unstated is the apparent contradiction between arguing that there is a long lag in realizing the economic impact of new technology and assuming a very quick response by businesses to a change in the pace of price decline in that same technology.

Alcaly finishes these two chapters by predicting that the productivity growth increases of the 1990s will continue to be greater than the pre-1995 pace once we emerge from the current economic slowdown. Assuming that the acceleration in semiconductor price declines continues, this seems a safe prediction.

It is best to skip the short history of the semiconductor industry included by Alcaly in this portion of his book, since it adds nothing to existing accounts and includes some notable inaccuracies. For example, he writes that the transistor “did not evolve into the microprocessors that run personal computers until the late 1970s” and that Ted Hoff in 1971 “developed the silicon-etching process that made the microprocessor possible.” The first commercial microprocessor was actually produced in 1971; more sophisticated 8-bit microprocessor designs that were used in early personal computers were shipping by 1974. Hoff gets the credit for realizing that Intel’s manufacturing process (which he did not develop) made it possible to cram enough components on a single chip to produce a complete microprocessor; Hoff’s colleague Federico Faggin led the team responsible for the actual layout and design of that microprocessor.

Alcaly also gives major play to some provocative but highly speculative theories: Tom Wolfe’s arguments that “small-town” culture in general, and Grinnell College in particular, played a major role in the development of the semiconductor industry and that the development of the VisiCalc spreadsheet software in the 1970s was “instrumental in launching the takeover wave of the 1980s.” Maybe, but maybe not.

Next, Alcaly includes a puzzling chapter that takes fairly gratuitous shots at John Kenneth Galbraith’s 1967 book The New Industrial State. Galbraith is lambasted as the poster boy for erroneous predictions that the economy was going to evolve into a set of increasingly bureaucratized and oligopolistic industrial firms, rather than the lean, competitive, dynamic players populating the new economy. Short and not particularly good histories of the steel and computer industries are provided as evidence of Galbraith’s failings at prognostication.

The remaining four substantive chapters of the book seem to redefine the new economy to include whatever dominated discussions in business sections of newspapers over the past 15 years. There is a chapter on lean and flexible production systems at Toyota, Dell, and others, and on the importance of new workforce practices. But Toyota’s practices go back to the 1960s and have been imitated by U.S. competitors since at least the 1980s. The connection to a new economy defined around the productivity surge of the late 1990s seems less than clear.

A chapter on the stock market takes us through the speculative bubble of the late 1990s. The resounding claim at the end of this chapter that “it is the development of extraordinary new firms and the above-average rates of productivity and economic growth they help to generate . . . that make economies new” would seem to push the temporal bounds of the new economy back decades, if not centuries.

Next comes a chapter on junk bonds and takeovers, which is one of the most rewarding in the book. The discussion of the interrelated topics of acquisitions and their finance is interesting and insightful. Instead of summarizing the literature of others, in which he has merely dipped his toe, the author is writing about a subject he knows and knows well. Indeed, he mainly skips the academic literature that he relies on earlier in the book. But the connection to the new economy is completely lacking. He makes no attempt to argue that this had anything to do with the recent resurgence in productivity or even that it is new. Indeed, Alcaly notes that junk bonds were widely used in the 1920s (and, I might add, in the 1820s!).

A final chapter on monetary policy says that Alan Greenspan, and before him Paul Volcker, have done atypically good jobs at stimulating economic growth. Quoting Milton Friedman and Anna Schwartz, he attributes their success to superior leadership. Alcaly concludes that the economy reinvented itself in the late 1990s and that the resulting high growth rates are likely to continue in the new century.

In sum, this book would be best consumed as a tapas menu of small appetizers. A careful selection of chapters can make a light but stimulating meal, but avoid the other stuff.

End of the world?

The lights in my office at the University of Maryland blinked once and then went out. It was 4:11 p.m. on Thursday, August 14. The Great Blackout of 2003 had just turned off the lights across the entire northeastern United States and parts of Canada. It was the most extensive electrical blackout in history. The state of Maryland, however, is below the southern boundary of the blacked-out grid, and elsewhere in the state people were going about their normal business unaware of the chaos further north. Yet somehow, one finger from the Great Blackout reached beyond the northeastern grid, down to the sprawling University of Maryland campus, and switched off the lights, while outside the campus the lights continued to burn. No one seemed to quite understand how it had happened.

The power grid could be a metaphor for our modern scientific world. The purpose of the grid is clear: Unlike the products of other utilities, electric power cannot be stored. The power company must generate the exact amount of power that is being used, literally responding to every electrical switch that is thrown. Linking separate power companies in a vast grid is meant to use the statistics of large numbers to smooth out fluctuations, thus reducing the likelihood of local blackouts.

Unfortunately, the power grid, like the human body, is the product of evolution rather than design. And like the body, the grid is burdened with vestigial organs that no longer serve a clear purpose and nerve connections that are no longer relevant. The power grid has become so complex that no one fully understands it. Instead of being absorbed by the grid, a relatively small local disruption in Ohio cascaded throughout the entire system, shutting down state after state. Has modern technology made civilization too complex to manage?

A leading scientist, Sir Martin Rees, England’s Astronomer Royal, gives us his sober assessment of the prospects for the survival of modern civilization in Our Final Hour. The more technologically complex the world becomes, the more vulnerable it is to catastrophe caused by misjudgments of the well-intentioned as well as deliberate acts of terrorism. Complexity leaves us vulnerable to natural disasters and simple human blunders, as well as low-tech terrorist attacks. The irony is that even the elaborate defenses we construct to protect ourselves could become the instruments of our destruction. Edward Teller once proposed a vast armada of nuclear missiles in parking orbits, weapons at the ready, to be dispatched on short notice to obliterate any asteroid that threatened Earth. Most people would rather take their chances with the asteroid.

Rees’ perspective is that of a cosmologist, but he is above all a human: “The most crucial location in space and time,” he writes, “could be here and now. I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century. Our choices and actions could ensure the perpetual future of life (not just on Earth, but perhaps far beyond it, too). Or in contrast, through malign intent, or through misadventure, twenty-first century technology could jeopardize life’s potential, foreclosing its human and posthuman future. What happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.”

What follows is a set of brilliant essays forming more or less independent chapters that could be read in any order. He does not ignore the continued threat of nuclear holocaust or collision between Earth and an asteroid, but we have lived with these threats for a long time. His primary focus is on 21st century hazards, such as bioengineered pathogens, out-of-control nanomachines, or superintelligent computers. These new threats are difficult to treat because they don’t yet exist and may never do so. He acknowledges that the odds of self-replicating nanorobots or “assemblers” getting loose and turning the world into a “grey-goo” of more assemblers are remote. After all, we’re not close to building a nanorobot, and perhaps it can’t be done. But this, Rees points out, is “Pascal’s wager.” The evaluation of risk requires that we multiply the odds of it happening (very small) by the number of casualties if it does (maybe the entire population).
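A purely illustrative calculation (the numbers are mine, not Rees’) shows why even a minuscule probability can matter in this kind of reckoning: an event judged to have a one-in-a-billion annual chance of killing six billion people carries an expected toll of 10^-9 × 6 × 10^9 = 6 lives per year, on the order of hazards that societies spend real money to guard against. And an expected-value calculation arguably understates the case, since extinction would also foreclose all the future lives Rees asks us to weigh.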

Personally, I think the grey-goo threat is zero. We are already confronted with incredibly tiny machines that devour the stuff around them and turn it into replicas of themselves. There are countless millions of these machines in every human gut. We call them bacteria and they took over Earth billions of years before humans showed up. We treat them with respect or they kill us.

So why isn’t Earth turned into grey-goo by bacteria? The simple answer is that they run out of food. You can’t make a bacterium out of just anything, and they don’t have wings or legs to go somewhere else for dinner. Unless they can hitch a ride on a wind-blown leaf or a passing animal, they stop multiplying when the local food supply runs out. Assemblers will do the same thing. You should find something else to worry about.

But that’s just my vote. As Rees puts it, “These scenarios may be extremely unlikely, but they raise in extreme form the issue of who should decide, and how, to proceed with experiments that have a genuine scientific purpose (and could conceivably offer practical benefits), but that pose a very tiny risk of an utterly calamitous outcome.” The question of who should decide, I would argue, is the most important issue raised by this issue-filled book.

Rees recounts the opposition to the first test, at Brookhaven National Laboratory, of the Relativistic Heavy Ion Collider (RHIC). The accelerator is meant to replicate, in microcosm, conditions that prevailed in the first microsecond after the Big Bang, when all the matter in the universe was squeezed into a quark-gluon plasma. However, some physicists raised the possibility that the huge concentration of energy by RHIC could initiate the destruction of Earth or even the entire universe. Every scientist agreed that this was highly unlikely, but that wasn’t very comforting to the nonscientists whose taxes paid for RHIC.

The universe survived, but this sort of question will come up again and again. Indeed, if we try hard enough we can probably imagine some scenario, however unlikely, that could conceivably lead to disaster in almost any experiment. Rees urges us to adopt “a circumspect attitude towards technical innovations that pose even a small threat of catastrophic downside.”

But putting the brakes on science, which excessive caution would do, also has a downside. The greatest natural disasters in our planet’s history were the great extinctions produced by asteroid impacts. If astronomers were to discover a major asteroid headed for a certain collision with Earth in the 22nd century, we could, for the first time in history, make a serious attempt to deflect it. Had HIV appeared just a decade earlier, we would have been unable to identify the infection until full-blown symptoms of AIDS appeared. The AIDS epidemic, as terrible as it has been, would have been far, far worse.

“The theme of this book,” Rees concludes, “is that humanity is more at risk than at any earlier phase in its history. The wider cosmos has a potential future that could even be infinite. But will these vast expanses of time be filled with life, or as empty as the Earth’s first sterile seas? The choice may depend on us, in this century.”

Prime Time Science

Sometimes fantasy beats reality—at least on TV.

For decades, the science community has been waiting for a popular TV program that features scientists as engaging dramatic characters and the work of science as exciting as well as intellectually rigorous. Other professionals had established themselves in the spotlight. Lawyers were everywhere from Perry Mason to L.A. Law to The Practice. Physicians from Dr. Kildare to the team on ER have been mainstays of TV programming, though biomedical researchers remained off-camera. Even high school teachers became TV regulars while scientists remained on the sidelines.

Science finally made it onto the little screen with CSI (Crime Scene Investigation), a show so popular that it quickly gave birth to the spin-off CSI: Miami and now a group of imitators. Of course, the Las Vegas crime lab is not exactly Los Alamos National Laboratory or the National Institutes of Health, but scientists should be pleased that tens of millions of viewers, particularly young people, are tuning in every week to see people in lab coats looking into microscopes, performing chemical analyses, and creating models on computers. Even better, they are asking pressing questions, posing hypotheses, designing experiments, and evaluating the results scrupulously to see if they support the hypotheses. Not only that, these are attractive people with romantic interests and family problems.

Most of the action takes place in a laboratory building that could be featured in Architectural Digest and that is outfitted with technology that could come from the 2010 Sharper Image catalog. The show’s Web site explains the use of gas chromatographs, microspectrophotometers, and thermocyclers. Although the lighting is better suited to a dance club than a lab, it looks great on TV. The computer graphics are better than video games. The images of bullets ripping through internal organs are gripping, even frightening. The music that plays while the stars stare intently at computer screens or carefully drip reagents into a beaker makes it clear that their work is both fascinating and way cool.

But why did the networks have to pick forensic science? As the articles in this issue make clear, forensic science as it has been practiced is not exactly the poster child for how science should be done. Wishful thinking has replaced rigor in the evaluation of polygraphs, fingerprinting, hair analysis, ballistics, handwriting identification, and other forensic procedures. The forensic science community has cast a veil over its work, avoided asking the hard questions, and kept mum about known weaknesses in the science underlying the techniques used in police investigations and offered as evidence in court.

Fortunately, it doesn’t matter. Although this may come as a shock, TV is not reality. To paraphrase the late film critic Pauline Kael, TV is a little world without much gravity. Even the more literate and serious shows, such as The West Wing and The Sopranos, which try to create characters with some resemblance to human beings, are simply imaginary worlds of brilliantly witty White House staffers and introspective mobsters.

Of course, the CSI lab is not realistic, but it creates an appealing fantasy that scientists should applaud. It will stimulate youthful interest in forensic science and in all science. It promotes the virtues of care, skeptical empiricism, and honesty. As forensic scientists bask in their newfound celebrity, they might even be moved to practice these virtues a bit more actively in their own work.

The Limits of the Polygraph

Developed almost a century ago, the polygraph is the most famous in a long line of techniques that have been used for detecting deception and determining truth. Indeed, for many in the U.S. law enforcement and intelligence communities, it has become the most valued method for identifying criminals, spies, and saboteurs when direct evidence is lacking. Advocates of its use can plausibly claim that the polygraph has a basis in modern science, because it relies on measures of physiological processes. Yet advocates have repeatedly failed to build any strong scientific justification for its use. Despite this, the polygraph is finding new forensic and quasi-forensic applications in areas where the scientific base is even weaker than it is for the traditional use in criminal trials. This is very troubling, because these new uses are based on overconfidence in the test’s accuracy.

In recent years, and especially since the 2001 terrorist attacks, the U.S. public seems to have become far more willing to believe that modern technology can detect evildoers with precision and before they can do damage. This belief is promulgated in numerous television dramas that portray polygraph tests and other detection technologies as accurately revealing hidden truths about everything from whether a suitor is lying to prospective parents-in-law to which of many possible suspects has committed some hideous crime. Unfortunately, the best available technologies do not perform nearly as well as people would like or as television programs suggest. This situation is unlikely to change any time soon.

Although there is growing pressure from some constituencies to expand the use of polygraph testing in forensic and other public contexts, it would be far wiser for law enforcement and security agencies to minimize use of the tests and to find strategies for reducing threats to public safety and national security that rely as little as possible on the polygraph. Courts that are skeptical about the validity of polygraph evidence are well justified in their attitude.

Legal precedents

An unsuccessful attempt to introduce a polygraph test in a District of Columbia murder case in the 1920s led to a famous court decision. A trial judge’s refusal to allow the testimony of William Moulton Marston, who while a graduate student at Harvard had experimented with a method for detecting deception by measuring systolic blood pressure, was appealed. In the 1923 case of Frye v. United States, the circuit court affirmed the trial judge’s ruling, stating that, “while courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs . . . We think the systolic blood pressure deception test has not yet gained such standing.”

The Frye “general acceptance” test became the dominant rule governing the admissibility of scientific expert testimony for the next 70 years. Most courts refused to admit testimony about polygraph evidence, often with reference to Frye. (Marston, by the way, became prominent not only as a polygraph advocate but as the creator, in 1940, of the first female comic book action hero: Wonder Woman, who was known for the special powers of her equipment, including a magic lasso that “was unbreakable, infinitely stretchable, and could make all who are encircled in it tell the truth.”)

In 1993, in the Daubert case, the Supreme Court outlined the current test for the admissibility of scientific evidence in the federal courts. The Daubert test, codified in the Federal Rules of Evidence in 2000, requires trial court judges to act as gatekeepers and evaluate whether the basis for proffered scientific, technical, or other specialized knowledge is reliable and valid. Although Daubert replaced the general acceptance test of Frye, many states, including New York, California, Illinois, and Florida, continue to use Frye. Increasingly, however, courts in Frye jurisdictions are applying a hybrid test that incorporates much of the Daubert thinking. This thinking is consistent with the belief of most scientists that hypotheses gain strength from having survived rigorous testing.

Despite the consistency in basic outlook that evidence such as polygraph tests must be evaluated on the basis of its scientific merit, actual court decisions regarding polygraph use vary widely. In general, courts look at the admissibility of polygraph test results in several ways. Many courts, especially state courts, maintain a per se rule excluding polygraph evidence. They do so for reasons ranging from doubt about its scientific merit to concerns that its use would usurp the traditional jury function of assessing credibility. However, a significant number of jurisdictions that otherwise exclude polygraph evidence under a per se rule nonetheless allow the parties to stipulate to the admissibility of the evidence before the test is administered. These courts typically set requirements on matters such as the qualifications of the polygraph examiners and the conditions under which the tests are to be given. It is presumed that the stipulation makes the examinee take the test more seriously and leads to the selection of more impartial polygraph examiners, both factors that produce more accurate results. These assumptions have some commonsense appeal, but they are unsupported by research and don’t address whether the accuracy and reliability of neutral polygraph examinations are sufficient to permit them as evidence.

There is a troubling aspect to the practice of permitting the parties to stipulate to polygraph admissibility. Ordinarily, judges determine the existence of preliminary facts that are necessary to the admission of proffered evidence. That the parties are willing to stipulate to the admissibility of polygraph results should not free the judge from making the preliminary determination of validity. To be sure, parties regularly stipulate to the admissibility of evidence. But polygraph evidence is unique in that the stipulation occurs before the evidence—the polygraph result—exists. Because of the error rates of polygraph tests, courts should be reluctant to endorse stipulations that amount to little more than a calculated gamble.

Since Daubert, the biggest change in form, if not substance, in regard to polygraphs is the increased number of federal courts that articulate a discretionary standard for determining admissibility. The Ninth Circuit Court held in United States v. Cordoba that Daubert requires trial courts to evaluate polygraph evidence with particularity in each case. This decision does not appear, however, to have substantially changed the practice of excluding polygraph evidence. Federal courts still invariably exclude such evidence under Cordoba, pointing to high error rates and the lack of standards for administering polygraphs. Rule 403 of the Federal Rules, which provides for the exclusion of otherwise admissible evidence when its probative value is substantially outweighed by unfair prejudice, plays a prominent part in leading courts to exclude polygraph evidence. Courts regularly cite Rule 403 when noting the danger that polygraphs will infringe on the jury’s role in making credibility judgments, confuse the jury, or waste the court’s time.

Many jurisdictions outside of the purview of the Federal Rules now employ discretionary admittance tests. Possibly the most permissive jurisdiction is New Mexico, with its law that “entrusts the admissibility of polygraph evidence to the sound discretion of the trial court.” In Massachusetts, a Daubert state, trial courts have similar discretion to admit polygraph evidence, although with a significant caveat. The Massachusetts Supreme Judicial Court held that polygraph evidence is admissible only after the proponent introduces results of proficiency exams that indicate the examiner’s reliability.

Overconfidence in the polygraph presents a significant danger to achieving the objectives for which the polygraph is used.

Two main constitutional issues have arisen in courts’ decisions about admitting polygraph test results as evidence: the claim that excluding exculpatory polygraph results violates a defendant’s Sixth Amendment right to present evidence, and the claim that admission of inculpatory polygraph results violates a defendant’s Fifth and Fourteenth Amendment rights to due process. In general, courts have steered clear of the minutiae of polygraph research and have treated reservations regarding polygraph accuracy as not rising to constitutional dimensions. For example, in United States v. Scheffer in 1998, the Supreme Court upheld a military court rule that per se excludes polygraph evidence. The court said that exclusionary rules “do not infringe the rights of the accused to present a defense as long as they are not arbitrary or disproportionate to the purposes they are designed to serve.” According to the court, the per se rule has the aim of keeping unreliable evidence from the jury: The government’s conclusion that polygraphs were not sufficiently reliable was supported by the fact that “to this day, the scientific community remains extremely polarized about reliability of polygraph techniques.”

Constitutional questions also arise when defendants claim that admission of inculpatory polygraph results violates due process principles. Once again, courts generally find that the evidentiary standards applicable to polygraphs meet constitutional requirements. Courts have held, however, that the Fifth Amendment privilege against self-incrimination applies to the taking of a polygraph, and thus a defendant’s refusal to do so cannot be used against him or her. Moreover, courts carefully evaluate the waiver of a defendant’s right to counsel or right to remain silent in regard to stipulation agreements concerning polygraph examinations.

The perils of ambiguity

In the wake of controversy over allegations of espionage by Wen Ho Lee, a nuclear scientist at the Department of Energy’s Los Alamos National Laboratory, the department ordered that polygraph tests be given to scientists working in similar positions.

The validity of polygraph testing depends in part on the purpose for which it is used. When it is used for investigation of a specific event, such as after a crime, it is possible to ask questions that have little ambiguity, such as “Did you see the victim on Monday?” Thus it is clear what counts as a truthful answer. When used for screening, such as to detect spies or members of a terrorist cell, there is no known specific event being investigated, so the questions must be generic, such as “Did you ever reveal classified information to an unauthorized person?” It may not be clear to the examinee or the examiner whether a particular activity justifies a “yes” answer, so examinees may believe that they are lying when providing factually truthful responses, or vice versa. Such ambiguity necessarily reduces the test’s accuracy. Validity is further compromised when tests are used for what might be called prospective screening (for example, with people believed to be risks for future illegal activity), because such uses involve making inferences about future behavior on the basis of information about past behaviors that may be quite different. For example, does visiting a pornographic Web site or lying about such activity on a polygraph test predict future sex offending?

These and other continuing concerns prompted the Department of Energy, at the request of Congress, to ask the National Research Council (NRC) to conduct a thorough study of the validity of polygraph testing; that is, its ability to distinguish accurately between lying and truth-telling across a variety of settings and examinees, even in the face of countermeasures that may be employed to defeat the test. Although the NRC was asked to focus on uses of the polygraph for personnel security screening, it examined all available evidence on polygraph test validity, almost all of which comes from studies of specific-event investigations.

The NRC study, completed in 2003, examined the basic science underlying the physiological measures used in polygraph testing and the available evidence on polygraph accuracy in actual and simulated investigations. With respect to the basic science, the study concluded that although psychological states associated with deception, such as fear of being accurately judged as deceptive, do tend to affect the physiological responses that the polygraph measures, many other factors, such as anxiety about being tested, also affect those responses. Such phenomena make polygraph testing intrinsically susceptible to producing erroneous results.

To assess test accuracy, the committee sought all available published and unpublished studies that could provide relevant evidence. The quality of the studies was low, with few exceptions. Moreover, there are inherent limitations to the research methods. Laboratory studies suffer from lack of realism. In particular, the consequences associated with lying or being judged deceptive in the laboratory almost never mirrored the seriousness of these actions in the real-world settings in which the polygraph is used. Field studies are limited by the difficulty of identifying the truth against which test results should be judged and the lack of control of extraneous factors. Most of the research, in both the laboratory and the field, does not fully address key potential threats to validity.

The study found that with examinees untrained in countermeasures designed to beat the test, specific-incident polygraph tests “can discriminate lying from truth-telling at rates well above chance, though well below perfection.” It was impossible to give a more precise estimate of polygraph accuracy, because accuracy levels varied widely across studies for reasons that could not be determined from the research reports.

For several reasons, however, estimates of accuracy from these studies are almost certainly higher than the actual polygraph accuracy of specific-incident testing in the field. Laboratory studies tend to overestimate accuracy, because laboratory conditions involve much less variation in test implementation, in the characteristics of examinees, and in the nature and context of investigations than arise in typical field applications. Field studies of polygraph testing are plagued by selection and measurement biases, such as the inclusion of tests carried out by examiners with knowledge of the evidence and of cases whose outcomes are affected by the examination. In addition, they frequently lack a clear and independent determination of truth. Because of these inherent biases, field studies are also highly likely to overestimate real-world polygraph accuracy.

To help inform policy discussions, the committee calculated the performance of polygraph tests with several possible accuracy indexes in hypothetical populations with known proportions of liars and truth-tellers. The committee’s conclusions were supported by beyond-the-best-case analyses that assumed a greater accuracy level than scientific theory or validation research suggested could be consistently achieved by field polygraph tests, even in specific-incident investigations.

The practical implications of any test accuracy level depend on the application for which the test is to be used. Tables 1 and 2 show beyond-the-best-case performance for polygraph tests in two hypothetical applications. In each case, the test is used in two ways. In “suspicious” mode, the test is interpreted strictly enough to correctly identify 80 percent of deceptive examinees; in “friendly” mode, it is interpreted to protect the innocent, so that fewer than half of 1 percent of the innocent examinees “fail.” In each case, we assume that 10,000 tests are given over a period of time.

Table 1. Expected results of a polygraph test procedure with better than state-of-the-art accuracy in a hypothetical population of 10,000 security screening examinees that includes 10 spies.

In Table 1, a security screening application, we assume that only 10 of 10,000 examinees are guilty of a target offense, such as espionage. In the suspicious mode, the test identifies 8 of the 10 spies, but also falsely implicates about 1,598 innocent examinees. Further investigation of all 1,606 people would be needed to find the 8 spies. Someone who “failed” this test would have a 99.5 percent chance of being innocent. In the friendly mode, only about 39 innocent employees would fail the test, but 8 of the 10 spies would “pass” and be allowed to continue doing damage. The committee concluded that for practical security screening applications, polygraph testing is not accurate enough to rely on for detecting deception.

Table 2 summarizes criminal investigation applications in which only suspects are tested, and half of the suspects (5,000 of 10,000) are actually guilty. In the suspicious mode, the test correctly implicates about 4,000 of the guilty but falsely implicates about 800 of the innocent. Thus, almost 17 percent of those who “fail” the test are in fact innocent. In our judgment, “failing” such a polygraph test would leave reasonable doubt about guilt. In the friendly mode, only about 19 of 5,000 innocent people would “fail” the test, but about 4,000 of the 5,000 criminals would “pass.” Of those who “fail,” 98 percent would be guilty, but few criminals would fail.

Table 2. Expected results of a polygraph test procedure with better than state-of-the-art accuracy in a hypothetical population of 10,000 criminal suspects that includes 5,000 criminals.
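The arithmetic behind these two tables is easy to reproduce. The short Python sketch below uses detection and false-positive rates inferred from the figures quoted above (assumptions for illustration, not values taken directly from the NRC report) and recomputes the expected counts for both applications in both modes; rounding means a count may differ by one or two from those in the text.

def screen(population, guilty, sensitivity, false_positive_rate):
    """Return expected counts for one application of the hypothetical test."""
    innocent = population - guilty
    caught = round(sensitivity * guilty)                      # guilty who "fail"
    falsely_flagged = round(false_positive_rate * innocent)   # innocent who "fail"
    missed = guilty - caught                                  # guilty who "pass"
    share_innocent = 100 * falsely_flagged / (caught + falsely_flagged)
    return caught, falsely_flagged, missed, share_innocent

# Rates inferred from the text: "suspicious" mode catches 80% of liars at the
# cost of roughly 16% false positives; "friendly" mode holds false positives
# below half of 1% but catches only about 20% of liars.
MODES = {"suspicious": (0.80, 0.16), "friendly": (0.20, 0.0039)}

SCENARIOS = {
    "Security screening (10 spies among 10,000)": (10_000, 10),
    "Criminal suspects (5,000 guilty among 10,000)": (10_000, 5_000),
}

for label, (population, guilty) in SCENARIOS.items():
    print(label)
    for mode, (sensitivity, fpr) in MODES.items():
        caught, flagged, missed, share = screen(population, guilty, sensitivity, fpr)
        print(f"  {mode:10s}: {caught} guilty fail, {flagged} innocent fail, "
              f"{missed} guilty pass; {share:.1f}% of those who fail are innocent")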

Reasonable people may disagree about whether a test with these properties is accurate enough to use in a particular law enforcement or national security application. We cannot overemphasize, however, that the scientific evidence is clear that polygraph testing is less accurate than these hypothetical results indicate, even for examinees untrained in countermeasures. In addition, it is impossible to tell from the research how much less accurate the testing is. Accuracy in any particular application depends on factors that remain unknown.

Two justifications are offered for using polygraph testing as an investigative tool. One is based on validity: the idea that test results accurately indicate whether an examinee is telling the truth in responding to particular questions. The other is based on utility: the idea that examinees, because they believe that deception may be revealed by the test, will be deterred from undesired actions that might later be investigated with the polygraph or induced to admit those actions during a polygraph examination. The two justifications are sometimes confused, as when success at eliciting admissions is used to support the claim that the polygraph is a valid scientific technique.

On the basis of field reports and indirect scientific evidence, we believe that polygraph testing is likely to have some utility for deterring security violations, increasing the frequency of admissions of such violations, deterring employment applications from potentially poor security risks, and increasing public confidence in national security organizations. Such utility derives from beliefs about the procedure’s validity, which are distinct from actual validity or accuracy. Polygraph screening programs that yield only a small percentage of positive test results, such as the programs used in the Departments of Energy and Defense, might be useful for deterrence, eliciting admissions, and related purposes. This does not mean that the test results can be relied on to discriminate between lying and truth-telling among people who do not admit to crimes. Most people who lie about committing major security violations would “pass” such a screening test.

Overconfidence in the polygraph—a belief in its accuracy that goes beyond what is justified by the evidence—presents a significant danger to achieving the objectives for which the polygraph is used. In national security applications, overconfidence in polygraph screening can create a false sense of security among policymakers, employees in sensitive positions, and the general public that may in turn lead to inappropriate relaxation of other methods of ensuring security, such as periodic security reinvestigation and vigilance about potential security violations in facilities that use the polygraph for screening. It can waste public resources by devoting to the polygraph funds and energy that would be better spent on alternative procedures. It can lead to unnecessary loss of competent or highly skilled individuals in security organizations because of suspicions cast on them by false positive polygraph exams or because of their fear of such prospects. And it can lead to credible claims that agencies using polygraphs are infringing civil liberties while producing insufficient benefits to national security.

It may be harmless if a television show fails to discriminate between science and science fiction, but it is dangerous when government does not know the difference. In our work conducting the NRC study, we found that many officials in intelligence, counterintelligence, and law enforcement agencies believe that if there are spies, saboteurs, or terrorists working in sensitive positions in the federal government, the polygraph tests currently used for counterintelligence purposes will find most of them. Many such officials also believe that experienced examiners can easily identify people who use countermeasures to try to beat the test. Scientific evidence does not support any of these beliefs; in fact, it goes contrary to all of them.

It can also be dangerous if courts or juries are overconfident about polygraph accuracy. If jurors share the misunderstandings that are common among counterintelligence experts and television writers, they are likely to give undue credence to any polygraph evidence that may be admitted. The dangers are even greater as polygraph testing expands into forensic applications that are not subject to strong challenge in adversarial processes.

Polygraphs and polygraph-like tests are used for a variety of other purposes, ranging from identifying fraudulent insurance claims to verifying the winner of a fishing contest. The use of the polygraph for interrogating foreign nationals in terrorism investigations and to verify information from informants, which has been much in the news recently, often involves tests being given through translators. There is no scientific evidence supporting any of these uses, and the use of translators introduces additional questions about reliability.

Perhaps the most prevalent use of polygraphs that has emerged beyond those in criminal investigation and national security settings has been in post-conviction sex-offender maintenance programs, which are now required in more than 30 states. As part of their probation program in a typical jurisdiction, released sex offenders are required to submit to periodic polygraph examinations. This practice seems to have originated in the 1960s but became widespread only in the past decade or so.

Advocates for this use tout its accuracy in other settings, citing studies claiming between 96 and 98 percent accuracy in correctly identifying deception and suggesting that polygraph accuracy has improved with recent advances in technology. Both claims are inconsistent with the general evidence on polygraph accuracy. Regarding actual sex offending, we have been unable to locate a single controlled randomized trial, or even a field trial with anything approaching credibility, in connection with polygraph testing. Instead of developing serious validation efforts, the American Polygraph Association has acted as if all that is required is uniformity of process. The view, according to J. E. Consigli, who presented the association position in the Handbook of Polygraph Testing, is that “the utility use of polygraphs to elicit admissions and break through denial in sex offenders has demonstrated its necessity.”

Although the use of the polygraph to screen sex offenders may have utility for eliciting admissions, it is important to note the wide discretion given to polygraph examiners. There is a lack of uniformity in the types of polygraph examinations used in various jurisdictions, in terms both of questions used and of format. In terms of accuracy, there is no evidence that a “failed” polygraph test is an accurate indicator of concealed sex crimes. This is a particular concern because polygraph tests given in such settings often revolve around test questions that emphasize “sexually deviant” or “high-risk” behavior, such as the use of alcohol or drugs, sexual activity with a consenting adult, or “masturbation to deviant fantasy,” rather than on the detection of actual sex crimes or other violations of the terms of parole. Such testing is based on the presumption that legal but “undesirable” behaviors are indicators of illegal activity. Claims that polygraph testing is an effective and important management or treatment tool that lowers sexual and general criminal recidivism during supervision and treatment have no credible scientific basis.

Courts have been relatively permissive about the use of polygraphs in probation programs, viewing the evidentiary standards in such settings as different from those associated with the courts themselves. For example, in State v. Travis, the court found that, although the defendant’s agreement to a condition of probation requiring him to submit to a polygraph examination did not establish the admissibility of the results, it could still be used as grounds for the revocation of parole because he was uncooperative and resisted supervision.

The use of polygraph testing in a variety of settings has clearly proved to be extremely problematic. What, then, should be done? At a minimum, we need to continue to be wary about the claimed validity of the polygraph as a scientific tool, especially with regard to its current forensic uses. We believe that the courts have been justified in casting a skeptical eye on the relevance and suitability of polygraph test results as legal evidence. Generalizing from the available scientific evidence to the circumstances of a particular polygraph examination is fraught with difficulty. Further, the courts should extend their reluctance to rely on the polygraph to the many quasi-forensic uses that are emerging, such as the sex offender management programs. The courts and the legal system should not act as if there is a scientific basis for many, if any, of these uses.

Securing U.S. Radioactive Sources

The catastrophic attacks of September 11, 2001, and the anthrax mailings that took place shortly thereafter highlighted the nation’s vulnerability to unconventional forms of terrorism. One type of threat that has recently received close attention from policymakers and the news media is the potential for attacks with radiological dispersal devices (RDDs). Such weapons, which include so-called dirty bombs, are designed to spread radioactive contamination, causing panic and disruption over a wide area. The number and diversity of radioactive sources pose a serious security challenge, and the United States has yet to take all the necessary steps to strengthen controls to match the heightened terrorist threat.

Most people are aware of the danger of radioactive material associated with nuclear power, but the potential sources of material for an RDD include a large class of commercial radioactive sources used in medicine, industry, and scientific research. Of the millions of sources in use worldwide, only a small fraction, if maliciously employed in an RDD, are powerful enough to cause serious harm to human health. Yet this fraction still includes tens of thousands of sources of the type and quantity useful for a potent RDD. (See “Radioactive Sources in the United States” at the end of this article for information on the types of potentially high-risk sources.)

Although an RDD uses radioactive materials, even a very powerful RDD would cause far less damage than a nuclear weapon. The difference between a dirty bomb and a nuclear bomb is, as Graham Allison of Harvard University so eloquently put it, “the difference between a lightning bug and lightning.” Unlike a nuclear weapon, an RDD would likely cause few deaths from the direct effects of exposure to ionizing radiation. Nevertheless, many people could develop cancer over years or decades. And the costs of decontamination and, if necessary, rebuilding could soar into the billions of dollars, especially if an RDD attack occurred in a high-value urban setting. Moreover, terrorists detonating RDDs would try to sow panic by preying on people’s fears of radioactivity.

Although there have been no actual RDD detonations, two case studies involving radioactive sources point to the psychological, social, and economic damage that could result from an RDD. First, in 1987 in Goiania, Brazil, scavengers stole a powerful radioactive source (containing about 1,375 curies of radioactivity) from an abandoned medical clinic. Not realizing what it was, they broke the source open. Four people died, more than 100,000 others had to be monitored for contamination, and cleanup costs amounted to tens of millions of dollars. The second case study concerns the U.S. steel industry. Radioactive sources that found their way into scrap yards have accidentally been melted in steel mills 21 times, most recently in July 2001. Those contamination incidents have cost the steel industry an estimated quarter billion dollars. In response, the industry has installed radiation detectors in scrap yards, as well as at the entrances of and throughout mills. Such “defense in depth” appears to have reduced the frequency of incidents.

The sources that fall outside regulatory controls are not limited to those that show up in scrap yards. In the United States, a radioactive source is lost, stolen, or missing about once a day. Although most of those “orphan” sources are relatively weak, they could still cause panic and disruption if detonated in an RDD.

Federal agencies are now reviewing and revising their programs and policies to improve the security of radioactive sources against theft, diversion, and use in radiological terrorism. It is a challenging assignment: Regulatory responsibilities were fragmented enough before the new Department of Homeland Security was added to the mix. This is an appropriate time to review the origins of security practices and past problems with the security of radioactive sources in order to gain insights into whether current efforts to improve security are soundly based and properly directed. It is also essential to examine whether the United States has a well-developed, coordinated national plan for managing the risks of radiological terrorism.

Even before the terrorist attack of September 11, 2001, the Nuclear Regulatory Commission (NRC) had begun action to tighten controls on general-licensed radioactive sources, in particular those used in manufacturing and other settings, because disused sources were sometimes being found mixed with scrap metal. After September 11, the NRC requested that licensees undertake more stringent interim security measures. Although the details of the measures are sensitive information not intended to be published openly, we know that these security improvements were meant to increase security mainly at locations containing very highly radioactive material. The enhanced security entails, among other efforts, restricting access to radioactive material and coordinating the security efforts of licensees with local and federal law enforcement. The NRC’s security plans use as a starting point the results of a joint Department of Energy and NRC study identifying radioactive sources in the highest risk categories based on potential health effects resulting from radiation exposure. As we will explain, to realize maximum effectiveness the plans should incorporate a multifaceted approach that also includes cooperation and shared responsibility among government agencies, suppliers, and licensees.

Life cycle of radioactive sources

Developing a systematic plan requires understanding the stages of a source’s life cycle and the security measures in place at each stage. The first stage is the production of the radioisotopes that power radioactive sources. Such radioisotopes are made either in nuclear reactors or in particle accelerators. Reactor-produced radioisotopes present a greater security risk because they typically have longer half-lives and are generated in larger quantities. Government-required security standards are typically in place at the reactor sites, but the United States is not a leading producer of commercial radioisotopes at reactors.

The next stage involves placing radioisotopes into radioactive sources and manufacturing the equipment that will contain the sources. Major U.S. equipment manufacturers import most of their radioisotopes from foreign reactors. These companies are believed to protect their materials using the same industrial security measures that are applied to other high-value goods. After September 11, the NRC advised manufacturers to step up security. Sources are shipped to hospitals, universities, food irradiation facilities, oil well drilling sites, industrial radiography facilities, and other venues. Security practices vary according to the type of facility and activity. Food irradiation plants, which employ highly radioactive materials, probably have tighter security than hospitals, for example.

Security is of particular concern in the oil industry, which often transports radioactive sources across borders. Recently, a high-risk radioactive source was stolen from a major oil company in Nigeria. Of the more than 22,000 portable radioactive gauges in use, about 50 are reported stolen each year, according to the NRC. To prevent such thefts, the NRC announced in July 2003 that it is considering a new rule that would require portable gauges to be secured with at least two independent physical controls whenever they are left unattended.

Some radioactive sources pass through yet another phase if they are shipped overseas. Current U.S. regulations allow the import and export (except to the embargoed countries of Cuba, Iran, Iraq, Libya, North Korea, and Sudan) of most high-risk radioactive sources under a general license, meaning that the government is not required to conduct a detailed review of the credentials of the sender and recipient. The NRC is reportedly considering a proposed rule change to remedy this problem and has already instituted interim security measures. In March, during Operation Liberty Shield, licensees were requested to give the NRC at least 10 days’ notice of any shipment of highly radioactive sources. The commission is also working closely with U.S. Customs to develop a source-tracking database. Monitoring began in earnest in April, and preliminary data suggest that a few hundred shipments of highly radioactive sources enter or leave the United States every year.

A major vulnerability of the process for licensing radioactive sources is its susceptibility to fraud.

A radioactive source eventually becomes ineffective as its potency declines, but the “disused” source might still contain potent amounts of radioactivity. Ideally, users would dispose of or recycle such sources quickly, but because disposal is expensive and proper facilities are few, users often hold on to their sources. The risk of loss, theft, or seizure by terrorists goes up accordingly. Not all source manufacturers provide disposal or recycling services, so the government also must provide safe and secure disposal sites. As we explain below, the current disposal system in the United States is in dire need of repair.

Other issues attend the shipment of sources between stages of their life cycle. The U.S. Department of Transportation (DOT) regulates shipments within the United States, adjusting security measures according to the size of the shipment. Labeling and packaging requirements provide for the protection of transportation workers and bystanders both in routine transit and under accident conditions. DOT sets packaging specifications for small quantities of radioactive material, and the NRC is responsible for large quantities. Although the security measures for large, highly radioactive shipments are reportedly stringent, both industry and parts of the government have resisted implementing improved security efforts such as background checks of drivers and adequate arming of guards.

Alternative technologies

The International Commission on Radiological Protection and the congressionally chartered National Council on Radiation Protection and Measurements (NCRP) hold as a pillar of radiation protection the principle of justification. This principle calls for evaluating the risks and benefits of using a radioactive source for a particular application. Users are supposed to opt for a nonradioactive alternative if there is one that provides comparable benefit and less risk, including the risk associated with waste management.

The NRC has taken the position that advocating alternative technologies is not part of its mission. The commission has not explained its reasoning, but it may believe that it is only in the business of regulating the radioactive sources that licensees choose to use, not the business of overseeing licensees’ decisions to use them. Nonetheless, it can be argued that the NRC’s charge from Congress—to protect public health, safety, and property as well as provide for the common defense and security—is sufficient to require the commission to adopt the principle of justification and, at least in principle, to encourage the consideration of alternative technologies. This is not to suggest that the NRC should second-guess licensees’ decisions to use radioactive sources, simply that the commission should ensure that licensees are making informed decisions that take into account justification and technological alternatives. Applying the principle of justification would reduce the number of radioactive sources in use and thus cut the risk of an RDD event occurring. The National Academy of Sciences, the International Atomic Energy Agency (IAEA), the NCRP, and the Health Physics Society have all recommended that users consider alternative technologies.

One U.S. industry that is adopting alternative technologies is steel, itself no stranger to the risks and costs of radioactive contamination. Steel mills use nuclear gauges to monitor the level of molten steel in continuous casters. If molten steel breaks through the casting system and strikes a gauge, the gauge housing and even the source could melt, causing contamination. Accordingly, mill operators are replacing nuclear gauges on continuous casters with eddy current and thermal systems, even though they are more expensive. The tradeoff—the cost of alternative technology versus the cost of contamination—makes the new systems a smart choice.

Some of the national laboratories are performing R&D to replace the most dangerous radioactive sources (those containing very dispersible radioactive compounds) with sources that pose less of a security hazard. Unfortunately, technology developed at the national labs is not readily available to the marketplace. At an IAEA conference on the radioactive source industry in April 2003, major source producers reportedly expressed interest in forming public-private partnerships to bring these alternative technologies to market. In the United States, such partnerships are sorely needed.

Licensing fraud

A major vulnerability of the process for licensing radioactive sources is its susceptibility to fraud. The first noteworthy U.S. case became public in 1996, when Stuart Lee Adelman pled guilty to fraudulently obtaining radioactive material and was sentenced to a five-year prison term. Adelman, also known as Stuart von Adelman, posed as a visiting professor at the University of Rochester and, illicitly using university resources, obtained licensed materials from suppliers. That was not his first such crime. In 1992, he was arrested in Toronto on a U.S. fugitive warrant and was found to have illegally obtained radioactive material and stashed it in a public storage locker.

Although no definite evidence points to terrorism in the Adelman case, an assistant U.S. District Attorney remarked that the radioactive material found in Canada may have been part of a scam to obtain money from terrorists. Adelman, who reportedly possessed a graduate degree in nuclear physics, had been employed by a state licensing agency and had worked as a radiation safety officer at two universities, illustrating the alarming potential for insider crime.

Thousands of high-risk disused radioactive sources throughout the United States have no clear disposal pathway and could therefore fall into the hands of terrorists.

Fraudulent licenses might also be used to import radioactive sources into the United States. In May 2003, Argentina’s nuclear regulatory agency red-flagged a request from a party in Texas for a teletherapy-sized shipment of radioactive cobalt. That quantity could provide enough radioactivity for a potent RDD. When the “license” presented to the supplier proved to be nothing more than a dental x-ray registration certificate, a concerned Argentine official contacted the Texas radiation control program. The FBI is investigating to determine whether the incident was a serious attempt to fraudulently import radioactive material, a hoax, or a test of the regulatory system.

Such incidents teach many lessons. First, fraud is not hypothetical; it is happening. Second, creation of a Web-based master list of regulatory authorities worldwide would facilitate communication among officials (in the Argentine case, the official had to search the Internet to find the relevant agency in Texas). The IAEA could host such a list. Third, regulatory agencies should exchange information about possible fraud and do so expeditiously. In the United States, information needs to flow more freely among the NRC, the Agreement States (33 U.S. states that regulate certain radioactive materials under NRC agreements), and other federal agencies. Internationally, such communication could be encouraged by the IAEA. Finally, suppliers should routinely verify requests for the purchase of large quantities of radioactive material.

Disposal

One of the most worrisome unresolved problems concerning the safety and security of radioactive sources in the United States is providing for adequate end-of-life cycle management. In principle, users can return their disused sources to manufacturers, transfer them to other users, store them, or ship them to government disposal sites. However, none of those methods is foolproof.

The option to return radioactive sources to the manufacturer is an acceptable disposal method to list on an application for a license, but manufacturers can and do go out of business. That has happened in the case of some teletherapy sources, which usually contain cobalt-60 in kilocurie quantities and hence meet NRC criteria for high security concerns. Owners of General Electric or Westinghouse teletherapy machines cannot return sources to the manufacturer, because neither company makes the machines any longer.

In the United States, the use of teletherapy units has declined as accelerators, which generate cancer-treating radiation without using a radioactive source, have replaced them. Some units have been abandoned or are in the possession of clinics that have gone bankrupt. In other cases, former users have exported their sources to countries that still use the technology. Many of the recipient countries are in the developing world, which taps into this secondhand market. As the IAEA has emphasized, more than half of the world’s nations, including most of the developing world, have inadequate regulatory controls. Thus, the secondhand market could pose increasing security risks.

Thousands of high-risk disused radioactive sources throughout the United States have no clear disposal pathway and could therefore fall into the hands of terrorists. Only three disposal sites for low-level waste—a classification that includes most disused radioactive sources—operate in the United States. These sites are located in Barnwell, South Carolina; Hanford, Washington; and Clive, Utah. The Clive disposal site can accept waste from all states, but it handles only the lowest-level waste. The other two sites are available only to certain states. Starting in 2008, when restrictions on access to the Barnwell site take effect, most states will have no disposal site for the bulk of their low-level radioactive waste. Unwanted radioactive materials will accumulate at hospitals, universities, and other facilities where they are vulnerable to loss, theft, or seizure by terrorists. Even when a disposal site is available, disused sources are often kept at relatively unsecured facilities for long periods because of high disposal costs. An estimated half million radioactive sources in the United States belong to this category, but only a small fraction of these sources could fuel potent RDDs.

Why does the country face this problem? In 1986, Congress passed the Low Level Radioactive Waste Policy Amendments Act, which placed responsibility for most low-level waste disposal on the states and gave the federal government responsibility for disposal of the higher-level waste. But because of strong resistance to siting low-level disposal facilities on states’ land, fewer commercial disposal sites are operating today than when Congress enacted the legislation. Another problem is that after 17 years, the federal government has yet to provide a permanent repository for the higher-activity waste. Indeed, it has only begun to make progress toward securing some of this radioactive material in temporary storage.

Although the federal effort got off to a slow start, the Off-Site Source Recovery (OSR) Project of the U.S. Department of Energy (DOE) has in recent years rounded up more than 7,000 disused sources, most of them radioactive enough to pose a security concern. Thousands more sources that remain to be secured are now registered on the OSR database. Despite the project’s relative success, a recent U.S. General Accounting Office audit found that the OSR suffers from a lack of DOE management support and that the project has not identified a pathway toward final disposal of the higher-activity waste. The project also faces an impending funding shortage: The supplemental support issued by Congress in October 2002 is slated to run out next year.

The United States lacks a comprehensive functioning national program for managing the radioactive waste generated from disused sources. As a step toward a solution, regional repositories could be established to securely store unwanted sources until their final disposal. Existing secure federal sites could be used for interim storage as a way to cut through the roadblocks that have stalled the states from siting disposal facilities. In parallel, the federal government can move toward a decision on final disposal. The states still have a crucial role to play by pressing the federal government to use its resources to correct the problem. New legislation must establish clear requirements for DOE to set up, without delay, safe and secure federal regional storage facilities for unwanted radioactive sources.

Managing the risks

Clearly, other measures are needed to close the gaps in the security of radioactive sources. The priority that the NRC assigns to the security requirements for different radioactive sources should take into account the kind of damage they would inflict if they were used in an RDD. The commission’s current priority system is based on preventing radiation injuries and deaths, which, in the case of a terrorist act using an RDD, are likely to be limited. The major consequences of an RDD would be psychosocial and economic, and the NRC’s system for prioritizing sources does not reflect that.

Although psychosocial effects can be difficult to quantify, there is an ample body of data on the economic effects. The cost for contaminated steel mills to shut down, clean up, and dispose of radioactive waste averaged $12 million per event. Most of the sources that caused that damage would not have met NRC criteria for high priority. But radioactive sources that are less likely to cause radiation injuries or deaths are still quite capable of causing significant economic damage. The U.S. priority system for radioactive sources needs to be refined to account for such consequences.

Of course, the responsibility for improving radioactive source security does not fall solely on the U.S. government. Suppliers, users, and the states have a fundamental interest in closing gaps and, equally important, can contribute ideas to improve safety and security. Fostering partnerships that facilitate information exchange and cooperation among federal agencies, state regulators, suppliers, and users would do much to reduce the risk of radioactive sources going astray. Indeed, given the number and diversity of radioactive sources, cooperation and shared responsibility are the only way to achieve the highest level of security.

Radioactive Sources in the United States

Because the United States lacks a comprehensive national inventory of radioactive sources, their exact number is unknown. In 1998, Joel O. Lubenau and James G. Yusko estimated that some 2 million U.S. devices contained licensed radioactive sources. Some devices, such as radiography cameras and teletherapy units, contain a single source; others, such as large irradiators for sterilization and food preservation, some medical devices, and certain nuclear gauges, contain multiple sources. About a quarter of the devices, including those containing the largest sources, are used under a specific license, the rest under a general license. The absence of a national inventory also hinders reporting the number of radioactive devices or sources by category of use. The following table lists many of the more common practices using larger radioactive sources. The data are derived from IAEA-TECDOC-1344.

Practice | Typical Radioisotopes | Typical Radioactivity Amounts (Curies)
Radioisotope thermoelectric generators | Strontium-90 | 20,000
Radioisotope thermoelectric generators | Plutonium-238 | 280
Sterilization & food irradiators | Cobalt-60 | 4,000,000
Sterilization & food irradiators | Cesium-137 | 3,000,000
Self-contained & blood irradiators | Cobalt-60 | 2,400–25,000
Self-contained & blood irradiators | Cesium-137 | 7,000–15,000
Single-beam teletherapy | Cobalt-60 | 4,000
Single-beam teletherapy | Cesium-137 | 500
Multibeam teletherapy | Cobalt-60 | 7,000
Industrial radiography | Cobalt-60 | 60
Industrial radiography | Iridium-192 | 100
Calibration | Cobalt-60 | 20
Calibration | Cesium-137 | 60
Calibration | Americium-241 | 10
High and medium dose rate brachytherapy | Cobalt-60 | 10
High and medium dose rate brachytherapy | Cesium-137 | 3
High and medium dose rate brachytherapy | Iridium-192 | 6
Well logging | Cesium-137 | 2
Well logging | Americium-241/beryllium | 20
Well logging | Californium-252 | 0.03
Level and conveyor gauges | Cobalt-60 | 5
Level and conveyor gauges | Cesium-137 | 3–5

 

Forum – Fall 2003

Federal R&D: More balanced support needed

Thomas Kalil (“A Broader Vision for Government Research,” Issues, Spring 2003) is correct that the missions of agencies with limited capacities to support research could be advanced significantly if Jeffersonian research programs were properly designed and implemented for those agencies. However, certain goals and objectives, such as winning the War on Crime under the Department of Justice or guaranteeing literacy or competitiveness of U.S. students under the Department of Education, do not lend themselves to the type of approach used by the Defense Advanced Research Projects Agency.

The agencies with strong research portfolios mentioned by Kalil–Defense, Energy, the National Aeronautics and Space Administration (NASA), and the National Institutes of Health (NIH)–are fundamentally different from nonmission agencies such as Education and Justice. Research-oriented agencies do not think twice about using innovative ideas and technologies to solve problems. In addition, their research is usually driven by agency employees and by high-technology clients such as aerospace companies, who actually use the research results. The NIH similarly works with research-intensive companies and provides the basic research these companies need to bring new therapeutics, devices, and treatments to the marketplace.

In contrast, Education, Justice, Housing and Urban Development, the Environmental Protection Agency (EPA), and other nonmission agencies have structural disadvantages when it comes to Jeffersonian research. Unlike Defense or NASA, these agencies and their contractors are not as dependent on agency research results. In some agencies, such as EPA, new approaches can be stifled by regulation and litigation. It becomes quite a daunting task for decisionmakers in these agencies to be willing to risk part of their budgets on novel research and to try to apply this research to agency problems. Furthermore, these agencies pass much of their funding and implementation responsibilities to states and local governments. It is highly unlikely that states, already experiencing budget crises, will want part of their funds to be held at the federal level for research. Alternatively, the individual research programs in the states may be quite small and duplication hard to avoid. Also, transferring research results from an individual state to the rest of the nation is difficult.

Although we should encourage Jeffersonian research, we need to understand that a commitment to analyze problems and implement creative solutions must both precede and complement the research. Such a commitment to understand the root causes of an agency’s most pressing problems and to find solutions to them should dramatically upgrade agency Government Performance and Results Act (GPRA) and quality efforts. Some problems can be addressed with existing technologies, such as requiring state-of-the-art techniques in federally funded road construction; others will require study and research. Such a major refocusing of agencies will occur only with strong support from, and a willingness to change among, the most senior administration and agency officials. I will be pleased when a new attitude toward innovation throughout the government permits us to get the most out of existing knowledge. Solving even bigger problems through Jeffersonian research will be an added bonus.

REPRESENTATIVE JERRY F. COSTELLO

Democrat of Illinois


Thomas Kalil makes a good point: The imbalance between the level of federal R&D resources invested in biomedical research and a few other favored areas and the resources devoted to other important priorities, such as education, poverty, and sustainable development, needs to be redressed.

The needs he cites–education, environment, and economic development–are not new. And neither is our R&D establishment. Why have we not done this already? Looking at a few instances in which we have redeployed our R&D resources to address a pressing national need may suggest some answers. Three such cases come to mind: space, energy, and homeland security. From the late 1950s to the mid-1960s, space R&D rose from almost nothing to a point where it absorbed nearly two-thirds of the nation’s annual civilian R&D expenditures. In the four years after the OPEC oil embargo of 1973, energy R&D ratcheted up by more than a factor of four, from $630 million to $2.6 billion a year. And within the past two years, R&D on homeland security and counterterrorism in the new Department of Homeland Security and other agencies has skyrocketed from $500 million a year to the vicinity of $3 billion.

Why has the nation been willing to boost R&D spending in these high-tech areas so sharply while virtually ignoring Kalil’s priorities? At least four factors seem important.

First, in each of the high-tech areas, a powerful national consensus on the importance and urgency of the issue developed, which translated rapidly into strong political support and, of course, funding. In contrast, although as a nation we spend a great deal on education, most of the money comes from state and local governments and most of it is needed to maintain our existing underfunded system. This and the other low-tech areas cited by Kalil lack an urgent national consensus.

Second, in space, energy, and homeland security, R&D has a well-defined role that is clearly and indisputably essential to the accomplishment of the mission. The role of R&D in the low-tech areas, although argued persuasively by Kalil, is not as obviously critical to success.

Third, in the high-tech areas, R&D capabilities, in some cases underutilized, were available to be redeployed. National labs and high-tech firms had relevant capabilities and were happy to apply them to new missions. These capabilities are not as obviously relevant and not as easy to redeploy to the low-tech areas.

Finally, in the high-tech areas, there was substantial agreement on the technical problems that needed to be solved: the development of practical spacecraft, of technologies to reduce dependence on imported oil, and of technologies to protect our infrastructure and our population from terrorist threats. No such agreement exists in the low-tech areas.

One can certainly appreciate Kalil’s frustration, but the problem he raises is even more daunting than the article suggests. Achieving a clear national consensus–not just lip service–on the importance and urgency of improving our educational system, protecting the environment, and pursuing sustainable development is a first step. Finding a technological path that will contribute to the solution of these problems and identifying the capabilities that can follow that path are also essential.

ALBERT H. TEICH

Director, Science & Policy Programs

American Association for the Advancement of Science

Washington, D.C.

[email protected]


Better rules for biotech research

John D. Steinbruner and Elisa D. Harris (“Controlling Dangerous Pathogens,” Issues, Spring 2003) capture well an issue that the scientific and national security communities are increasingly grappling with: the risk that fundamental biological research might spur the development of advanced, ever more dangerous, biological weapons. This discussion is very timely. Although concerns over the security implications of scientific research and communication are not new–cryptology, microelectronics, and space-based research have all raised such questions in the past–new times, new technologies, and new vulnerabilities mandate that we revisit them. The unprecedented power of biotechnology; the demonstrated interest of adversaries in both developing biological weapons and inflicting catastrophic damage; and our inability to “reboot” the human body to remedy newly discovered vulnerabilities, all call for a fresh look. Steinbruner’s and Harris’s interest is welcome, regardless of the ultimate reception of their specific proposal.

However, the formalized, legally binding approach they present faces significant challenges. Any such regulatory regime, even one largely based on local review, must operate on the basis of well-codified and fairly unambiguous guidelines that distinguish appropriate activities from dangerous ones. Such guidelines have proven very difficult to develop, perhaps because they are difficult to reconcile with the flexibility that is required to accommodate changing science, changing circumstances, and surprises. Had a list of potentially dangerous research activities been developed before the Australian mousepox studies that Steinbruner and Harris cite, it is not likely that mouse contraception would have been on it. Moreover, given the difficulty of anticipating and evaluating possible research outcomes, a formal system intended to capture only “the very small fraction of research that could have destructive applications” may still have to cast quite a broad net.

Fortunately, alternatives to a formal review and approval approach could still prove useful. Transparency measures that do not regulate research but rather seek to ensure that it is conducted openly can be envisioned, although they (like a more formal review system) might be difficult to reconcile with proprietary and national security sensitivities. Perhaps measures to foster a community sense of responsibility and accountability in biological research would have the greatest payoff: an ethic that encourages informal review and discussion of proposed activities, that heightens an obligation to question activities of other researchers that may seem inappropriate (and to answer questions others might have about one’s own work), and that engages in a continuing dialogue with members of the national security community.

Although the scientific community itself must lead this effort, government has a significant role to play. If some government research activity is deemed to require national security classification, processes and guidelines must nevertheless be developed to assure both foreign and domestic audiences that such work does not violate treaty commitments. The government’s national security community also has a responsibility to hold up its end of the dialogue with the scientific community. For example, it should provide a clear point of contact or center of expertise for scientists who ask for advice about the security implications of unexpected results.

If the technical and security communities are ultimately unable to arrive at formal or informal governance measures that look to be both useful (capable of significantly reducing security threats) and feasible (capable of being implemented at an acceptable cost), they still have an obligation to explain what measures were considered and why they were ultimately rejected. Scientists must not ignore or deny concerns about dangerous consequences or preemptively reject any type of governance or oversight mechanism. In the face of public concern, such an outcome could invite the political process to impose its own governance system, one that ironically would likely prove unworkable and ineffective but could nevertheless seriously impede the research enterprise.

GERALD L. EPSTEIN

Scientific Advisor

Advanced Systems and Concepts Office

Defense Threat Reduction Agency

Fort Belvoir, Virginia

[email protected]


Tighter cybersecurity

Bruce Berkowitz and Robert W. Hahn (“Cybersecurity: Who’s Watching the Store?” Issues, Spring 2003) give an excellent overview of the cybersecurity challenges facing the nation today. I would like to comment briefly on two of the policy options they propose to put on the table for improving security: regulation and liability.

Regulation could provide strong incentives for vendors and operators to improve security. I wonder, though, if the cost of regulation would be justified. Unless we can be confident of our ability to regulate wisely and efficiently, we could find ourselves spending vast sums of money without commensurate reductions in the threat, or unnecessarily constraining developments in technology. Given our limited understanding of the cost/benefit tradeoffs of different protection mechanisms, it would be hard to formulate sound mandatory standards for either vendors or operators. If we reach a point where these cost/benefit tradeoffs are understood, regulation may be unnecessary, as vendors and operators would have incentives to adopt those mechanisms shown to be worthwhile. Demand for security is also increasing, which provides another nonregulatory incentive. If regulation is pursued, it might best be limited to critical infrastructures, where cyber attacks pose the greatest concern.

Liability is also a difficult issue. For all practical purposes, it is impossible to build flawless products. An operating system today contains tens of millions of lines of code. The state of software engineering is not, and likely never will be, at a point where that much code can be written without error. Thus, if vendors are held liable for security problems, it should only be under conditions of negligence, such as failing to correct and notify customers of problems once they are known or releasing seriously flawed products with little or no testing. If liability is applied too liberally, it could not only inhibit innovation, as the authors note, but also lead to considerably higher prices for consumers as vendors seek to protect themselves from costly and potentially frivolous claims. Practically all cyber attacks are preventable at the customer end; it is a matter of configuring systems properly and keeping them up to date with respect to security patches. Vendors should not be responsible for flaws that customers could easily have corrected. That said, I am equally disturbed by laws such as the Uniform Computer Information Transactions Act, which lets vendors escape even reasonable liability through licensing agreements. We need a middle ground.

DOROTHY E. DENNING

Professor, Department of Defense Analysis

Naval Postgraduate School

Monterey, California


“Cybersecurity: Who’s Watching the Store?” is a very welcome appraisal of the nation’s de facto laissez-faire approach to battening down its electronic infrastructure. Bruce Berkowitz and Robert W. Hahn’s examination of the National Strategy to Secure Cyberspace is timely and accurate in its assessment of the strategy’s many shortcomings.

Less delicately stated, the strategy does nothing.

It is curious that it turned out this way, because one of its primary architects, Richard Clarke, had worked overtime since well before 2000 ringing alarm bells about the fragility of the nation’s networks. At times, Clarke’s message was apocalyptic: An electronic Pearl Harbor was coming. The power could be switched off in major cities. Cyber attacks, if conducted simultaneously with physical terrorist attacks, could cause a cascade of indescribable calamity.

These messages received a considerable amount of publicity. The media was riveted by such alarming news, but the exaggerated, almost propagandistic style of this coverage had the unintended effect of drowning out substantive and practical debate on security. For example, how to improve the after-the-fact, reactive, and antiquated antivirus technology on the nation’s networks, or what might be done about spam before it grew into the e-mail disaster it is now, never came up for discussion. By contrast, there was always plenty of time to speculate about theoretical attacks on the power grid.

When the Bush administration’s Strategy to Secure Cyberspace was released in final form, it did not insist on any measures that echoed the urgency of the warnings coming from Clarke and his lieutenants. Although the strategy made the case that the private sector controlled and administered most of the nation’s key electronic networks and therefore would have to take responsibility for securing them, it contained nothing that would compel corporate America to do so.

Practically speaking, it was a waste of paper, electrons, and effort.

GEORGE SMITH

Senior Fellow

GlobalSecurity.org

Pasadena, California

[email protected]


Risks of new nukes

Michael A. Levi’s points in “The Case Against New Nuclear Weapons” (Issues, Spring 2003) are well taken and his estimates generally correct, but some important additional points should be made.

“This argument [that the fallout from a nuclear explosion is less deadly than the biological or chemical fallout produced by a conventional attack] misses two crucial points,” Levi writes. Actually, it also misses a third crucial point: Chemical and biological agents in storage barrels in a bunker may not be reached by the heat and radiation from a nuclear explosion intended to destroy them in the time available before venting to the outside cools the cavity volume. How much is sterilized depends on the details of the configuration of the bunker and the containers, whether containers are buried separately in the earth, and other factors. The same is true of attacks with incendiaries and other methods. No guarantee of complete sterilization means that an unknown amount of agents could be spread along with the fallout. A study of the effectiveness of various attacks against a variety of bunker and other targets is needed to determine what happens. Even such a study, given all the uncertainties attending the targets, is unlikely to lead to reliable assurance of complete sterilization.

According to the article, “two megatons . . . would leave unscathed any facilities buried under more than 200 meters of hard rock.” This estimate is probably conservative for well-designed structures. In addition, the devastation on the surface would make it difficult for anyone to access or communicate with the facility. Levi’s estimates of the effects of fallout also seem reasonable, given past experience with cratering explosions.

Levi criticizes the notion that “many enemies are so foreign that it is impossible to judge what they value.” Without getting into the details of Levi’s argument, the notion he criticizes is a myth, convenient if one wants to demonize an enemy but harmful if one is looking for ways of dealing with real-life problems. The so-called rogue states have been deterred by far lesser threats than nuclear weapons, invasion, and regime change. Their pattern of responses to incentives, positive and negative, is not very different from that of other states. That they oppose the United States does not make them mysterious.

“To claim a need for further engineering study of the robust nuclear earth penetrator is disingenuous,” he writes. Under some circumstances, a penetrating nuclear weapon could be effective against some targets while at the same time causing destructive surface effects. A study could determine just what the targets would be and how destructive the effects would be in specific circumstances. This is not an engineering study in the usual sense, however.

Two more points: One, to assume that the United States will be the only user of nuclear weapons is the well-known fallacy of the last move. Other weapons-capable countries could make effective use of nuclear weapons against the United States without attacking the U.S. homeland or indeed any city. U.S. forces and bases in U.S.-allied territories would be easy targets to destroy. If the United States uses nuclear weapons as tools of military operations, other countries may be empowered or driven to do the same.

The second point: As a veteran of both the Cold War arms race and Cold War arms control, I feel that we were a little wise and a great deal lucky. To toss around nuclear threats against regimes that are nuclear-capable but not sure of their survival overlooks the element of luck that has attended our nuclear age to date. We can certainly start a nuclear war, but we have no experience in limiting it.

MICHAEL MAY

Professor (Research) Emeritus

Engineering-Economic Systems and Operations Research

Center for International Security and Cooperation

Stanford University

Stanford, California

[email protected]


No nonlethal chemical weapons

Mark Wheelis has written an article of fundamental importance (“‘Nonlethal’ Chemical Weapons: A Faustian Bargain,” Issues, Spring 2003). He emphasizes that nonlethal chemical weapons inevitably–and therefore predictably–carry a certain lethality when they are used; he offers as evidence the outcome of the Moscow theater siege in October 2002. He has also informed the current debate about whether use of such an agent in a domestic legal context would be a contravention of the 1993 Chemical Weapons Convention (CWC). His reasoning on the potential for misuse should sound an alarm bell in the minds of all proponents. In projecting into the future, he rightly indicates that the responsibility for ensuring that such agents are not developed, produced, transferred, or used rests with the country with the greatest military, biotechnology, and pharmaceutical power. Wheelis concludes that a “robust ethical and political system” is needed to prevent future deployment of such weapons.

Nonlethal chemical weapons are not envisioned as alternatives but as complements to the use of conventional weapons. This means that the legal debate should not hinge uniquely on whether such weapons are prohibited by the CWC; there are implications for other bodies of international law. If they were used in military conflicts, such weapons would serve only to increase the vulnerability of the affected people to other forms of injury. This is a serious consideration under international humanitarian law (the law of war). A fundamental premise of this body of law is that a soldier will recognize when an enemy is wounded or surrendering; in strategic contexts, neither would be easy if a chemical agent was used for incapacitation. It is then easy to see how such nonlethal agents when used in conjunction with other weapons would serve to increase the lethality of the battlefield. In a domestic context, the use of such agents in conjunction with other weapons brings human rights law into the picture in relation to whether the use of force in a given context is reasonable.

Wheelis has not referred to the time it takes for such agents to take effect. Would they really incapacitate people as proponents frequently claim? The medical literature reveals that opiate agents delivered by inhalation take some minutes rather than seconds to take effect. Even assuming delivery of a sufficient dose, incapacitation cannot be immediate. This is a serious disadvantage that proponents choose to ignore; much can happen in those minutes, including the execution of hostages or detonation of explosives. To sum up: Nonlethal chemical “knockout” agents do not exist.

In his Art of War, written more than 2,000 years ago, Sun Tzu observed, “Those who are not thoroughly aware of the disadvantages in the use of arms cannot be thoroughly aware of the advantages in the use of arms.” Do we have evidence that this observation is invalid?

ROBIN COUPLAND

Legal Division

International Committee of the Red Cross

Geneva, Switzerland


Against the backdrop of the Moscow theater catastrophe, Mark Wheelis argues that U.S. leadership to prevent the development of incapacitating chemical weapons is politically practical. Achieving more robust controls requires eliminating the secrecy that surrounds research on these weapons.

Wheelis’ concern that “short-term tactical considerations” will get in the way of good judgment is well-founded. Currently, federal officials are thwarting multilateral discussion about “nonlethal” chemical weapons and restricting access to unclassified information on government research. The National Academy of Sciences (NAS) itself has proven to be amenable to the government’s efforts to short-circuit debate.

Internationally, there have been two recent civil society efforts to secure a foothold on the international arms control agenda for incapacitating chemical weapons. Both have been quashed by the U.S. State Department, which brooks no discussion of the subject. In 2002, the Sunshine Project attempted to raise the issue at a meeting of the Chemical Weapons Convention (CWC) in the Hague. The U.S. delegation blocked our accreditation to the meeting. It was the first time that a nongovernmental organization had ever been banned from the CWC because of its arms control stance. Even more tellingly, the United States next prevented the International Committee of the Red Cross (ICRC) from speaking up at the CWC’s Review Conference earlier this year. U.S. diplomats were unable to bar the ICRC from the meeting, but did stop the international humanitarian organization from taking the floor and elaborating its concerns.

The State Department’s effort to stymie international discussion is paralleled inside the United States by the Pentagon’s work to prevent public disclosure of the full extent of its incapacitating chemical weapons work. One of the largest and most up-to-date troves of information about this research can be found in the Public Access File of the National Academies. Mandated by the Federal Advisory Committees Act (FACA), these files provide a public window on the activities of committees advising the government.

Between February and July 2001, the Joint Non-Lethal Weapons Directorate (JNLWD), run by the U.S. Marine Corps, placed dozens of documents on chemical and biological “non-lethal” weapons into the NAS public file. The documents contain key insights into federally sponsored research on incapacitating chemical weapons. NAS staff acknowledge that none of this material bears security markings. Yet the Academies are refusing to release the majority of the documents because of pressure from JNLWD. Legal experts of different political perspectives conclude that NAS stands in violation of FACA.

JNLWD deposited these documents because it was seeking an NAS green light for further work on chemical weapons. The Naval Studies Board’s remarkably imbalanced Panel on Non-Lethal Weapons obligingly produced a report endorsing further research–a report that demonstrated questionable command of treaty law. But the status of this study is in question, because FACA requires that NAS certify that the panel complied with FACA. It did not. NAS will not confirm or deny this.

NAS’ spokespeople enjoy painting the Academies as caught between a rock (FACA) and a hard place (the Marine Corps), but that dilemma is imaginary. FACA unambiguously requires release of these important materials, although NAS’ amenability to the Pentagon’s protests may help the Academies remain in the good graces of the Marine Corps. The real victim of this unlawful suppression of public records is a healthy, informed public and scientific debate over the wisdom of the accelerating U.S. research program on incapacitating chemicals.

EDWARD HAMMOND

Director, Sunshine Project

Austin, Texas

www.sunshine-project.org


National Academies’ response

Contrary to Hammond’s assertions, the National Academies’ study entitled “An Assessment of Non-Lethal Weapons Science and Technology” (available at www.nap.edu/catalog/10538.html) was conducted in compliance with Section 15 of the Federal Advisory Committee Act. Section 15 requires that certain material provided to the study committee as part of the information-gathering process be made available to the public. A number of documents provided by the U.S. government have not yet been made available pending a determination whether such material is exempt from public disclosure because of national security classification or one of the other exemptions under the Freedom of Information Act. The government has directed the National Academies not to release any particular document until the government has completed its review and made a recommendation on appropriate disposition. We have repeatedly asked the government to expedite its review of the remaining documents, and the government has replied that it continues to review documents and to notify the Academies of its recommendations. A number of such documents have been reviewed and made publicly available. We are committed to making available as soon as possible all remaining documents that qualify under the public access provisions of Section 15.

E. WILLIAM COLGLAZIER

Executive Officer, National Academy of Sciences

Chief Operating Officer, National Research Council


Mark Wheelis argues that interest in nonlethal chemical incapacitants for law enforcement or military operations represents bad policy. As an international lawyer, I would like to reinforce his concerns about signals the United States is sending to the rest of the world with regard to chemical incapacitants.

Wheelis correctly notes that the Chemical Weapons Convention (CWC) allows states parties to use toxic chemicals for law enforcement purposes, including domestic riot control. In the wake of the Moscow hostage incident, some CWC experts argued that the CWC only allows the use of riot control agents for domestic riot control purposes. Neither the text of the treaty nor the CWC’s negotiating history supports this restrictive interpretation of the law enforcement provision. This interpretation has been advanced out of fear that the law enforcement provision will become a fig leaf for military development of chemical incapacitants. Although this fear is understandable, the restrictive interpretation will not succeed in dealing with the problems created by the actual text of the CWC.

Wheelis acknowledges the problem that the law enforcement provision raises but tackles it through cogent arguments about the lack of utility that chemical incapacitants will have for law enforcement purposes. Legal rules on the use of force by law enforcement authorities in the United States support Wheelis’s position. As a general matter, legitimate uses of force by the police have to be highly discriminate in terms of the intended targets and the circumstances in which weapons can be used. The legal restrictions on the use of force by police (often enforced through lawsuits against police officers) strengthen Wheelis’s argument that chemical incapacitants would have little utility for law enforcement in the United States.

In terms of the military potential for chemical incapacitants, Wheelis is again correct in his reading of the CWC: The treaty prohibits the development of chemical incapacitants for military purposes. The CWC is clear on this issue, which underscores concerns that other governments and experts have about why the United States refused to allow the incapacitant issue to be discussed at the first CWC review conference in April and May of 2003. Only through diplomatic negotiations can the problems created by the law enforcement provision be adequately addressed under the CWC.

On the heels of the U.S. position at the review conference emerged revelations of a patent granted to the U.S. Army in 2003 for a rifle-launched grenade designed to disperse, among other payloads, chemical and biological agents. Although the U.S. Army claims that it will revise the patent in light of concerns raised by outside experts, the fact that the U.S. government granted a patent to cover things prohibited by not only international law but also U.S. federal law is disconcerting.

With the Bush administration blocking consideration of chemical incapacitants under the CWC, perhaps the time is ripe for members of Congress to scrutinize politically and legally the U.S. nonlethal weapons program.

DAVID P. FIDLER

Professor of Law and Ira C. Batman

Faculty Fellow

Indiana University School of Law

Bloomington, Indiana

[email protected]


Computer professionals in Africa

I read with interest G. Pascal Zachary’s article on the challenges facing Ghana and other African countries in developing an information technology sector to drive development (“A Program for Africa’s Computer People,” Issues, Spring 2003).

Statistics tell us that Africa has lost a third of its skilled professionals in recent decades. Each year, more than 23,000 qualified academic professionals emigrate to other parts of the world, a situation that has created one of the continent’s greatest obstacles to development. The result of this continual flight of intellectual capital is that Africa loses twice, with many of those leaving being the very people needed to pass on skills to the next generation.

Any attempt to stem the continuing hemorrhage of African talent must include presenting African professionals with an attractive alternative to leaving home. To keep Africans in Africa, it is critical to strengthen the motivation for them to be there and for those abroad to return home. Clearly, the ties of family, friends, and culture are not enough to negate the economic advantages of leaving.

If Africa is to be a socially, economically, and professionally attractive place to live, and if the rich potential of its human resources is to be realized, there is an urgent need for investment in training and in developing the technological skills that developed countries have in abundance.

Zachary’s article shows that although the economic rewards of a job are important, people also want to work for a company that will offer them professional development. Investing in training, therefore, makes good business sense. Along with better training will come a greater encouragement of innovation, initiative, and customer service.

Clearly, the lack of opportunity for African programmers and computer engineers to network and share ideas with their counterparts in the West further increases their sense of isolation and technical loneliness, making the option of leaving for technologically greener pastures ever more tempting. As an alternative, they should be given the opportunity to increase their skills and share best practices in their chosen field, minimizing the need for them to leave.

Although African economies need long-term measures, we can fortunately look to more immediate solutions to our skills crisis, as evidenced by the Ashesi University model. Interims for Development is another recent initiative designed to reverse the flow of skills and to focus on brain gain rather than brain drain. We assist companies in Africa by using the skills of professionals who volunteer to share their expertise through short-term training and business development projects.

Recently launched in Ghana and making inroads into other parts of Africa, the program marks a commitment by Africans living in the United Kingdom to work in partnership with Africans at home to provide the expertise necessary for the long-term technological progress and competitiveness of Africa in the global marketplace.

Through initiatives such as Ashesi and Interims for Development, the task of driving Africa’s technological development can be accelerated successfully, thereby creating the necessary conditions to keep Africa’s talent where it belongs.

FRANCES WILLIAMS

Chief Executive

Interims for Development

London, England

[email protected]


GM crop controversies

In “Reinvigorating Genetically Modified Crops” (Issues, Spring 2003), Robert L. Paarlberg eloquently describes the political difficulties that confront the diffusion of recombinant DNA technology, or gene splicing, to agriculture in less developed countries. He relates how the continued globalization of Europe’s highly precautionary regulatory approach to gene-spliced crops will cause the biggest losers of all to be the poor farmers in the developing world, and that if this new technology is killed in the cradle, these farmers could miss a chance to escape the low farm productivity that is helping to keep them in poverty.

Paarlberg correctly identifies some of the culprits: unscientific, pusillanimous intergovernmental organizations; obstructionist, self-serving nongovernmental organizations; and politically motivated, protectionist market distortions, such as the increasing variety of European Union regulations and policy actions that are keeping gene-spliced products from the shelf.

Although Paarlberg’s proposal to increase public R&D investment that is specifically tailored to the needs of poor farmers in tropical countries is well intentioned, it would be futile and wasteful in the present climate of overregulation and inflated costs of development. (The cost of performing a field trial with a gene-spliced plant is 10 to 20 times that of a trial with a plant that has virtually identical properties but was crafted with less precise and predictable techniques.) Likewise, I fail to see the wisdom of increasing U.S. assistance to international organizations devoted to agricultural R&D, largely because, as Paarlberg himself describes, many of these organizations—most notably the United Nations (UN) Environment Programme and Food and Agriculture Organization—have not been reliable advocates of sound science and rational public policy. Rather, they have feathered their own nests at the expense of their constituents.

The rationalization of public policy toward gene splicing will require a systematic and multifaceted solution. Regulatory agencies need to respect certain overarching principles: Government policies must first do no harm, approaches to regulation must be scientifically defensible, the degree of oversight must be commensurate with risk, and consideration must be given to the costs of not permitting products to be field-tested and commercialized.

The key to achieving such obvious but elusive reform is for the U.S. government to begin to address biotechnology policy in a way that is consistently science-based and uncompromising. Perhaps most difficult of all, it will need to apply the same remedies to its own domestic regulatory agencies. Although not as egregious as the Europeans, the U.S. Department of Agriculture, Environmental Protection Agency, and Food and Drug Administration also have adopted scientifically insupportable, precautionary, and hugely expensive regulatory regimes; they will not relinquish them readily.

At the same time that the U.S. government begins to rationalize public policy at home, it must punish politically and economically those who are responsible for the human and economic catastrophe that Paarlberg describes. We must pursue remedies at the World Trade Organization. Every science and economic attaché in every U.S. embassy should have biotechnology policy indelibly inscribed on his diplomatic agenda. Foreign countries, UN agencies, and other international organizations that promulgate unscientific policies, or that collaborate or cooperate with them in any way, should be ineligible to receive monies (including UN dues payments and foreign aid) or other assistance from the United States.

There are no guarantees that these initiatives will lead to more constructive and socially responsible public policy, but their absence from the policymaking process will surely guarantee failure.

HENRY I. MILLER

The Hoover Institution

Stanford, California

[email protected]


Heated debates over the European Union’s (EU’s) block of regulatory approvals of genetically modified (GM) crops since 1998 and the recent refusal of four African countries to accept such seeds as food aid from the United States make a point: Governments can no longer regulate new technologies solely on the basis of local standards and values.

Robert L. Paarlberg sums up the situation leading to the African refusal of U.S. food aid as follows: The EU’s de facto moratorium and its legislation mandating the labeling of foods derived from GM crops prevent the adoption of GM crops in developing countries. Paarlberg lists the channels through which Europe delays the adoption of this technology in the rest of the world: intergovernmental organizations, nongovernmental organizations (NGOs), and market forces. Intergovernmental organizations such as the United Nations Food and Agriculture Organization, under the influence of the EU, hesitate to endorse the technology. Their lack of engagement in building the capacity to regulate biotechnology in developing countries is a symptom of this hesitancy. European-based environmental and antiglobalization NGOs stage effective campaigns against GM crops that adversely affect regulation in the EU and in developing countries. The lack of demand for GM crops in global markets is largely attributed to the EU and Japan, because they import a large share of traded agricultural commodities.

Paarlberg suggests that the U.S. government consider three types of remedial action: support of public research to develop GM crops specifically tailored to the needs of developing countries, promotion of the adoption of insect-protected cotton, and emphasis on the needs of developing countries in international negotiations.

Although it is correct in some points, Paarlberg’s analysis is limited by its assumption that the minimalist U.S. approach to regulation, which does not support general labeling schemes for foods derived from GM crops, is the right approach for everyone. A more differentiated analysis of alternative motivations for the activities of intergovernmental organizations and NGOs and in markets challenges this assumption.

Paarlberg criticizes intergovernmental organizations’ emphasis on regulatory capacity-building as a sign of limited support for biotechnology. In my view, regulatory capacity-building is a prerequisite for the adoption of GM technology, in particular in developing countries. Local knowledge, skills, and institutional infrastructures are needed for the assessment and management of risks that are specific to crops, the local ecology, and local farming practices. For example, the sustainable use of insect-protected crops requires locally adapted insect-resistance management programs. Governments and intergovernmental organizations can create invaluable platforms for local farmers, firms, extension agents, and regulators to jointly develop and implement stewardship plans to ensure sustainable deployment of the technology.

Paarlberg’s analysis of the influence of NGOs focuses on the activities of extremists. He neglects the influence of more moderate organizations representing the interests of the citizen-consumer (consumer groups have had more influence in the European Commission since the creation of the Directorate General dedicated to health and consumer protection in 1997). Consumer organizations in the EU and the United States consider mandatory labeling a prerequisite for building public trust in foods derived from GM crops. This position reflects consumer interest in product differentiation and a willingness to pay for it by those with purchasing power. The EU’s (and in particular the United Kingdom’s) aim to stimulate public debate and to label foods derived from GM crops may delay their adoption in the short run. Arguably, the specifics of current proposals to label all GM crop-derived foods are difficult to implement. But the premise of public debate and labeling holds the promise of contributing to consumers’ improved understanding of agrofood production in the long run. This is desirable so as not to further contribute to our alienation from the foods we eat.

Paarlberg explains the lack of global market demand for GM crops in terms of Europe’s significant imports of raw agricultural commodities (28 billion euros in 2001). However, Europe’s even more significant exports of value-added processed food products (45 billion euros in 2001) escape his analysis. Paarlberg attributes the lack of uptake of foods derived from GM crops by European retailers and food processors to stringent EU regulations. However, retailers offering their own branded product lines had strong incentives for banning foods derived from GM crops, and large multinational food producers such as Nestlé and Unilever followed suit. Major U.S. food producers, including Kraft and Frito-Lay, also decided to avoid GM corn in their corn sources after StarLink corn, approved for animal feed but not for food use, was found in corn-containing food products.

Government approaches to the regulation of GM crops should be built on an understanding of the values and interests of stakeholders in the global agrofood chain (including consumers). International guidelines are required on how to conduct integrated technology assessments analyzing potential beneficial and adverse economic, societal, environmental, and health impacts of GM foods in different areas of the world. Such assessments may also highlight common interests in improved coordination in the development of regulatory frameworks across jurisdictions.

ARIANE KOENIG

Senior Research Associate

Harvard Center for Risk Analysis

Harvard School of Public Health

Boston, Massachusetts

[email protected]


The current polarization of the transatlantic debate on the use of genetically modified organisms (GMOs) in agriculture is related to the increasing influence of nongovernmental organizations (NGOs) in politics. Although corporate interests have a strong influence on U.S. policy, NGOs increasingly shape agricultural biotechnology policy in the European Union (EU) and its member states. The escalating trade dispute between the United States and the European Union over GM crops involves strong economic and political interests, as well as the urge to make gains on moral grounds in order to attain the public’s favor and trust. As a consequence, developing countries are increasingly pressured by governments, NGOs, and corporations to take sides on the issue and justify their stance from a moral point of view.

In this context, the predominantly preventive regulatory frameworks on GMOs adopted in developing countries seem to indicate that the anti-GMO lobbying of European stakeholders has been far more successful than the U.S. strategy of promoting GM crops in these countries. Robert L. Paarlberg attributes the current dominance of preventive national regulatory frameworks worldwide to the EU’s successful lobbying through multilateral institutions, but I believe that NGOs have also played a critical role by taking advantage of the changing meaning of “risk” in affluent societies. Risk, a notion that has its origin in science and mathematics, is increasingly used in public to mean danger from future damage, predicted by political opponents or anyone who insists that there is a high degree of uncertainty about the risks of a particular technology.

Political pressure is not against taking risks but against exposing others to risks. Since the United States is the global superpower and engine of technological innovation, most of its political decisions may produce risks that are likely to affect the rest of the world as well. As a consequence, the United States is inevitably perceived as an actor that imposes risks on the rest of the world. This politicization of the notion of risk is happening not just in Europe but also in developing countries. In addition, European state and nonstate actors are often encouraging people in developing countries to adopt such a politicized view of risk, not because of a genuine resentment against the United States but because of the hope of gaining moral ground in politics at home. Politicians and corporations in the United States counter this strategy with the equally popular strategy of blaming the Europeans for being against science and free trade and being responsible for world hunger.

The results of three stakeholder perception surveys I conducted in the Philippines, Mexico, and South Africa confirmed the increasing influence of Western stakeholders on the most vocal participants in the public debates on agricultural biotechnology in developing countries. Yet the broad majority of the survey participants in these countries turned out to have rather differentiated views of the risks and benefits of genetic engineering in agriculture, depending on the type of crop and the existing problems in domestic agriculture. They confirm neither the apocalyptic fears of European NGOs nor the unrestrained endorsements of U.S. firms.

The polarization of the global GMO debate produced by the increasing influence of nonstate actors in politics may turn out to be of particular disadvantage to people in developing countries, because Western stakeholders barely allow them to articulate their own differentiated stances on GMOs. Moreover, polarization prevents a joint effort to design a global system of governance of agricultural biotechnology to minimize its risks and maximize its benefits, not just for the affluent West but also for the poor in developing countries. The only way to end this unproductive polarization is to become aware that the debate is often not about science but about public trust. Therefore, public leadership may be needed that is not just trying to preserve public trust within its respective constituency but has the courage to risk a short-term loss of public trust within its own ranks in order to build bridges across different interests and world views; for example, by sometimes showing an unexpected willingness to compromise. This is the only way to restore public trust as a public good that benefits everybody and cannot be appropriated.

PHILIPP AERNI

Senior Research Fellow

Center for Comparative and International Studies

Swiss Federal Institute of Technology (ETH)

Zurich, Switzerland

[email protected]


Robert L. Paarlberg hits many of the reasons surrounding the controversy over genetically modified (GM) crops in the European Union (EU) and in developing countries squarely on the head and offers some well-reasoned and sensible suggestions for how the United States can regain the high ground in this debate.

Since the article was written, the United States has notified the World Trade Organization (WTO) that it intends to file a case against the EU moratorium on GM crops. This rather provocative U.S. action has not precluded some of the options Paarlberg suggests, but it does make implementing them more difficult.

In his explanation of why GM plantings have been restricted to four major countries (the United States, Canada, Argentina, and Brazil), Paarlberg cites the globalization of the EU’s precautionary principle. This pattern also reflects the choices made by the biotech companies. For sound economic and technical reasons, they have focused their resources on commercial commodities, such as soybeans, canola, corn, and cotton. Although these crops are certainly grown elsewhere, these four countries are major commercial producers of these crops. In addition, modifying these crops through genetic engineering is easier than modifying other crops, such as wheat, a biotech version of which is only now on the verge of commercialization.

Paarlberg very astutely points out that the U.S. government has often ignored international governmental organizations, such as the Codex Alimentarius, whereas the EU has invested significant resources in maintaining an influence there. I think the reasons go beyond those he cites. For example, some time ago there was a proposal to label soybean oil as a potential allergen. It did not occur to the U.S. government officials monitoring the Codex portfolio that a label indicating that a product “contained soybean oil” might have implications for U.S.-EU trade and in particular might be linked to the biotech debate. The U.S. officials tasked with following the Codex Alimentarius approached the issue from the perspective of food safety and science, not of trade policy and politics. So it is not simply a matter of resources, it is also a matter of perception.

Paarlberg is right to point out that the EU’s “overregulation” of biotech is in response to regulatory failures, such as foot and mouth disease, mad cow disease, tainted blood products, and dioxin. But there is another dimension: the tension between national governments and the European Commission. Most of those regulatory failures occurred first at the national level. It is probably not a coincidence that two of the countries most opposed to biotechnology—the United Kingdom and France—have been the countries most tied to regulatory failures.

There is another political player that Paarlberg doesn’t mention: the European Parliament. The emergence of the biotech issue coincided with the emergence of the Parliament as a political force in the EU. European Parliamentarians, like all elected representatives, are extremely responsive to their constituents—at least the vocal ones. Because of the co-decision process, in addition to considering the most extreme positions of individual member states, the European Commission must take into account the positions of the European Parliament, composed of a dozen different political parties that cut across national boundaries.

Paarlberg rightly points out that the biotech industry’s assumption that GM crops would become pervasive if they were grown widely by the United States and other major exporters was flat wrong. Some years ago, I attended a meeting in which a food company regulatory expert asked a biotech company executive: “What if consumers don’t want biotech maize?” The answer: “They won’t have a choice.” The current debate over the introduction of biotech wheat, in which farmers, flour mills, and food companies have asked biotech companies to delay commercialization until consumer acceptance increases, indicates the huge change in the balance of power between biotech companies, farmers, processors, and food companies.

Although I agree with Paarlberg’s recommendations for reinvigorating GM crops, I am not sanguine about the prospects for any of them. Now that the United States has filed its case against the EU, I am afraid that any U.S. effort to expand its international influence will be seen as a cynical ploy to buy friends in the WTO.

M. ANN TUTWILER

President

International Food & Agricultural Trade Policy Council

Washington, D.C.

[email protected]


Robert L. Paarlberg offers an interesting analysis of the genetically modified organism (GMO) problem, though I suspect he may be underestimating the role that health and environmental concerns played in the decision made by some African countries to reject GMO food aid. African policymakers have expressed concern that although GMO maize may be safe in the diet of Americans and other rich people who eat relatively small amounts in their total diet, the effects could conceivably be different among poor Africans who subsist almost entirely on maize. Moreover, if farmers were inadvertently to plant GMO maize as seed in tropical environments rich in indigenous varieties of maize, there is a risk of contaminating and losing some of these valued varieties.

I agree with Paarlberg’s recommendation that the future of biotechnology in developing countries most likely rests with publicly funded research. The multinational seed companies are not likely to pioneer new products for poor farmers in the developing world, except in those few cases where there is a happy coincidence between the needs of farmers in rich as well as poor countries (such as Bt cotton). On the other hand, publicly funded research does not all have to be undertaken by the public sector, and there are lots of interesting possibilities for contracting out some of the upstream research to private firms and overseas research centers.

But even good publicly funded research will not help solve Africa’s problems unless some other constraints are overcome too. Few African countries have effective biosafety systems, and most are therefore not in a position to enable biotechnologies to be tested in farmers’ fields and eventually released. Even if African countries were to settle for biosafety regulation systems that are more appropriate to their needs than Europe’s, it will still take some years to build that capacity and to ensure effective compliance. In the meantime, biotechnology research cannot contribute to solving Africa’s food problems in the way Paarlberg envisages. Given also the long lead times involved in all kinds of genetic improvement work, including biotechnology (often 10 to 15 years), the Food and Agriculture Organization of the United Nations is probably correct in concluding that biotechnology is not likely to be a major contributor to meeting the 2015 Millennium Development Goals.

Further complicating the issue is the matter of intellectual property. Already much genetic material and many biotechnology methods and tools have been patented, and this is imposing ever-increasing contractual complexity in accessing biotechnology. The public sector is not exempt from these requirements and in fact is increasingly finding that it needs to take out patents of its own to protect its research products. Public research institutions in developing countries are not currently well positioned to cope with these complexities, yet will find them hard to avoid once the TRIPS (Trade-Related Aspects of Intellectual Property Rights) agreement becomes effective.

Finally, if Europe persists with its labeling requirements, Paarlberg is correct in concluding that African countries will effectively have to limit biotechnology to those crops that are never likely to be exported to Europe.

PETER HAZELL

Director, Development Strategy and Governance Division

International Food Policy Research Institute

Washington, D.C.

[email protected]


Sign the Mine Ban Treaty

Richard A. Matthew and Ted Gaulin get much of it right in “Time to Sign the Mine Ban Treaty” (Issues, Spring 2003) in terms of both the issues surrounding the Mine Ban Treaty (MBT) and the larger context of the pulls on U.S. foreign policy between “the muscle-flexing appeal of realism and the utopian promise of liberalism.”

Although many have, I have never bought the United States’ Korea justification for not signing the treaty. I believe this to be another example of the military being unwilling to give up anything for fear of perhaps being compelled to give up something more. In 1994, then-Army Chief of Staff General Gordon Sullivan wrote to Senator Patrick Leahy, an MBT champion, that if Leahy succeeded in the effort to ban U.S. landmines, he would put other weapons systems at risk “due to humanitarian concerns.”

The U.S. military was also opposed to outlawing the use of poison gas in 1925. In that case, it was overruled by its commander in chief, who factored the military’s concerns into the broader humanitarian and legal context. Unfortunately, this has not been the case with landmines. Although President Clinton was rhetorically in support of a ban, he abdicated policy decisions to a military with which he was never really comfortable.

Under the current administration, I believe that the unilateralist muscle-flexing side of U.S. foreign policy has crushed much hope of meaningful support for multilateralism and adopting a policy of greater adherence to international law as a better solution than military force to the multiple problems facing the globe. The administration’s management of the situation leading up to the Iraq operation and its attitude since leave no room for doubt about that.

In current U.S. warfighting scenarios, high mobility is critical. In such fighting, landmines can pose as much risk to the movements of one’s own troops as to those of the enemy. The authors rightly point out that other U.S. allies with similar techniques and modern equipment, such as the British, have embraced the landmine ban. But it is also interesting to note that militaries around the world that could never contemplate approaching U.S. military superiority have also given up landmines—without the financial or technological possibility of replacing them.

The authors note that the benefit of the United States joining the MBT far outweighs the costs of giving up the weapon. They also describe the two priorities that have guided U.S. policy since World War I: creating a values-based world order and preserving U.S. preeminence through military dominance. I believe that the United States’ refusal to sign the MBT, given the cost-benefit analysis described by the authors, only underscores that the current priority dominating U.S. policy is not simply to preserve U.S. dominance but to ensure that no power will ever again rise, as the former Soviet Union did, to even begin to challenge it.

JODY WILLIAMS

Nobel Peace Laureate

International Campaign to Ban Landmines

Washington, D.C.


U.S. foreign policy vacillates between liberalism and realism. On the one hand, the United States has exerted much effort to create multilateral regimes that address collective problems and promote the rule of international law. On the other hand, it has been hesitant to endow such institutions with full authority because, like many states, it sees itself as the best guarantor of its own self-interest. In this latter sense, it uses its power to shape world affairs as it sees fit. Successive administrations have come down heavily on one side or the other, and even within administrations this tension tends to define foreign policy debate and practice.

Richard A. Matthew and Ted Gaulin explain the U.S. position toward the ban on antipersonnel landmines (APLs) in this way. Rhetorically, the United States espouses the promise of a world free of APLs, but in practice it refuses to take the essential step toward creating that world: signing the international Mine Ban Treaty (MBT).

Matthew and Gaulin want to show that U.S. foreign policy doesn’t have to operate this way. They demonstrate that the tension between liberalism and realism, as it expresses itself in the MBT decision (and, by extension, other areas), is a false one. They point out that supporting the MBT speaks to the U.S. desire for both greater multilateralism and maximization of its self-interest. APLs hold little military value for the United States, and signing the MBT would enhance U.S. security interests.

This type of argument—couching multilateralism in the language of self-interest—is a powerful form of commentary and is arguably the central motif in the tradition of advice to princes. It requires, however, an accurate perception of a state’s self-interest. Matthew and Gaulin present a compelling case for what is in the United States’ self-interest: By signing the MBT, which already enjoys 147 signatures, it would win greater multilateral support in advancing U.S. security by combating terrorism. I’m unsure, however, if the current administration shares this view of U.S. interest.

The Bush administration has never appreciated multilateralism. In fact, it sees efforts to promote multilateralism as a form of weakness. It believes that the United States does best when it states its objectives, demonstrates its strength, and leaves the world with the choice of either falling into line (multilateralism of the so-called “willing”) or facing U.S. rebuke. In contrast to some earlier administrations, the current one does not see the United States as the architect of international regimes and law or even world order. Architects build structures that they themselves have to live in. Why get caught up in all the rules and expectations that pepper international life? That is what weak powers must do, not the strong. It is so much easier—and, to the Bush administration, much more gratifying—to act on one’s own. The United States sees itself as a hegemon that can do what it wants whenever it pleases.

So when Matthew and Gaulin demonstrate that the United States can advance its self-interest by signing the MBT and enhancing multilateralism, one is tempted to say, “great idea, wrong world.” The age when U.S. administrations saw their own interests as linked inextricably to those of others is gone. This will mean finding new ways of arguing for the United States to do the right thing, such as signing the MBT.

PAUL WAPNER

American University

Washington, D.C.

[email protected]


I was pleased to see the article by Richard A. Matthew and Ted Gaulin. Indeed, it is long past time for the United States to join the 1997 Mine Ban Treaty, which prohibits the use, trade, production, and stockpiling of antipersonnel landmines. To date, 147 nations have signed the treaty and 134 have ratified it, including all of the United States’ major military allies.

Citing U.S. military reports and statements by retired military leaders, Matthew and Gaulin correctly point out that “the utility of antipersonnel landmines . . . should not be overstated.” In a September 2002 report on landmine use during the 1991 Persian Gulf War, the General Accounting Office stated that some U.S. commanders were reluctant to use mines “because of their impact on U.S. troop mobility, safety concerns, and fratricide potential.” Thus, it is not surprising that there have been no official media or military reports of U.S. use of antipersonnel landmines since 1991.

Those of us in the mine ban movement were then quite surprised to read the assertion by Matthew and Gaulin that U.S. special forces “regularly deployed self-deactivating antipersonnel landmines” in Afghanistan in 2001. When asked, the authors indicated that their sources were confidential, and that reports to them of antipersonnel mine use in Afghanistan were anecdotal.

Reports from deminers and the media in Afghanistan and Iraq indicate that although U.S. troops did have antipersonnel mines with them, they did not use them, and demining teams have not found them. Aside from humanitarian and political concerns, this is also consistent with the military’s battle plans. Taliban forces did not present a serious vehicle threat, which today’s U.S. mixed mine systems are meant to combat. Also, mixed mines are heavy and thus not typically carried by special operations forces.

Given the serious nature of this issue, it is important for us to search for the truth and not to rely on unsubstantiated sources. Unless and until it is proven that U.S. troops have used antipersonnel mines in Iraq or Afghanistan or anywhere else, it is irresponsible and unproductive for any of us to assert that they did.

On most of Matthew’s and Gaulin’s points, however, we are in enthusiastic agreement. They aptly describe the political and humanitarian costs to the U.S. government of espousing “the ideal of a mine-free world while refusing to take the most promising steps toward achieving it.” Landmines annually kill and maim 15,000 to 20,000 people, mostly innocent civilians, and they threaten the everyday life of millions more living in mine-affected communities. By remaining outside of the global norm banning this indiscriminate menace, the U.S. government gives political cover to countries such as Russia, India, and Pakistan that have laid hundreds of thousands of antipersonnel mines in recent years with devastating consequences for civilians. We hope the Bush administration will demonstrate military and humanitarian leadership, as well as multilateral cooperation, by joining this international accord.

GINA COPLON-NEWFIELD

Coordinator

U.S. Campaign to Ban Landmines

Boston, Massachusetts

[email protected]


Saving scientific advice

I share Frederick R. Anderson’s concerns (“Improving Scientific Advice to Government,” Issues, Spring 2003) that recent changes in the procedures used to select, convene, and operate National Research Council (NRC) and Environmental Protection Agency-Science Advisory Board (EPA-SAB) expert panels may erode the quality and efficiency of the advice they give to government. His recommendations are sensible, and many are already being followed.

However, I was disappointed that Anderson made only passing reference to the underlying cause of these changes. I have been closely involved with both the NRC and the SAB. In my view, both sets of recent procedural changes were undertaken in response to vigorous assaults mounted by a few outside interest groups that did not like the substance of the conclusions reached by NRC and SAB panels. Unable to gain traction on the substantive issues, these groups chose to attack the NRC’s and SAB’s procedures via lawsuits, a critical General Accounting Office report, and a variety of forms of public pressure.

If the advisory processes at NRC, SAB, and similar organizations are not to become increasingly “formal and proceduralized,” mechanisms must be found to protect them when external groups mount such challenges. Experienced lawyers like Anderson are in the best position to help develop such protection.

M. GRANGER MORGAN

Head, Department of Engineering and Public Policy

Carnegie Mellon University

Pittsburgh, Pennsylvania

[email protected]


The insightful observations and suggestions in Frederick R. Anderson’s article are in general accord with my own personal experiences with the NRC. I chair the Board on Environmental Studies and Toxicology (BEST), have chaired several NRC committees, and have served on others.

The steps to improvement that Anderson suggests are appropriate. Indeed, the NRC has already adopted most of them. The NRC provides an opportunity for the public to comment on the proposed members of study committees, although the NRC retains final authority for the composition of its committees. The NRC seeks to populate its committees with the best available scientific and technical experts, making sure that all appropriate scientific perspectives (and there are typically more than two!) on the issues are represented. An NRC official conducts a bias disclosure discussion at the first meeting of an NRC committee, at the end of which the committee determines whether its composition and balance are appropriate to its task. All information-gathering meetings of committees are open to the public, but meetings during which the committee discusses its conclusions and drafts its report are, of necessity, closed sessions. The extensive NRC review process is essential. Without exception, NRC reports are improved by review.

These important operational procedures need to be supplemented by additional practices, besides those highlighted by Anderson, that facilitate the smooth functioning of committees and increase the probability that consensus can be reached. I emphasize two of them here.

First, great care and thought need to be given by the sponsoring NRC board to drafting the mandate to committees. A mandate must clearly describe the scope of the study and the issues for which advice is sought; it must be drawn to emphasize that the task of the committee is to analyze the scientific and technical data that should inform relevant policy decisions, not to make policy recommendations. A poorly drafted mandate fosters unproductive debate among committee members about the scope of their task and often generates divisiveness within the committee.

Second, the choice of the committee chair is critical. Committee members can, and generally do, have biases, but the chair must be perceived to be free of bias. The chair’s task is to help members work together to reach consensus. Committee chairs should be well informed about the issues but must not be driven by personal opinions. Achieving balance is the most difficult task faced by any committee chair.

With the NRC’s detailed attention to mandates, committee composition, and thorough review, NRC committees generally achieve consensus and offer sound advice to government on a wide range of important issues. Thus far, demands for balance and transparency have not compromised the NRC’s independence. Nevertheless, Anderson raises a valid concern that too much transparency and demands for political correctness could weaken the advisory process. We need to remain vigilant.

GORDON H. ORIANS

Professor Emeritus of Zoology

University of Washington

Seattle, Washington

Forging a Science-Based National Forest Fire Policy

Large, intense forest fires, along with their causes and their consequences, have become important political and social issues. In the United States, however, there is no comprehensive policy to deal with fire and fuels and few indications that such a policy is in development.

Fire is, of course, a natural element of many wildlands. Forests are accumulations of combustible organic matter that can be set ablaze by lightning, a lit cigarette or match, or even sunlight focused through a lens. First to ignite are fine fuels such as pine needles, leaves, and twigs, but as heat accumulates, the bigger fuels such as shrubs and trees start to burn. If fuels are sufficient and environmental conditions, especially wind, are suitable, the fire will torch, move into tall tree canopies, and spread from tree to tree, producing a crown fire. Many of the fires that raged in the western United States during the summer of 2003 and in previous summers have been of this most destructive type.

A substantial amount of scientific evidence indicates that, in many North American forests, accumulations of fuels have reached levels far exceeding those found under “natural” or pre-European settlement conditions. These fuel accumulations result from human activities, including fire suppression, grazing, logging, and tree planting. Uncharacteristically high fuel levels create the potential for fires that are uncharacteristically intense. Millions of acres in western North America harbor these unprecedented fuel stores, although the total is probably less than the 190 million acres identified in the Bush administration’s Healthy Forests Initiative.

A national forest fire policy should cover every aspect of fire control: managing fuels within forests and landscapes; suppressing fires; and, ultimately, applying salvage and restoration treatments after wildfire. Currently in the United States, individual land management agencies such as the Forest Service and National Park Service have established fire policies and modify them periodically. But these are largely within-agency policies that have not been subject to public debate and review. Fire suppression activities on the local and national levels are coordinated among government organizations through formal agreements. Because of the different missions of these agencies, interagency policies are largely procedural checklists of actions that collectively constitute agency-specific fire management policies and goals.

De facto 20th-century national fire policy focused primarily on fire suppression rather than on the full array of relevant management tactics. During the past 40 years, some deviations from these policies have emerged, chiefly the adoption of natural fire and prescribed burning programs, particularly in national parks and wilderness areas. But aggressive suppression policies have continued to dominate. Indeed, they have actually been reinforced as a result of large intense fires that have invaded places where people live. As a universal panacea, however, suppression has failed. So the policy focus has shifted to another “universal” solution: the reduction of forest fuels via physical removal or prescribed burning.

Current efforts to develop national policies on fuels and fire include the administration’s initiative and the Healthy Forests Restoration Act (H.R. 1904), which the House of Representatives passed in the summer of 2003 to implement the administration’s proposal. However, these efforts focus on the short-term treatment of forest fuels rather than on developing a comprehensive national policy on fuels and fire management and identifying the scientific and social elements of such a policy.

Most of the provisions of the administration initiative and H.R. 1904, for example, deal primarily with reducing requirements for environmental analyses of fuel treatment projects, limiting public appeals, and requiring prompt judicial response to legal challenges. These are procedural matters and do not address substantive issues such as where, how, and why fuel projects are to be conducted. The assumption appears to be that if we free resource managers from procedural constraints, they will make the appropriate decisions about where, how, and why. Other elements of the proposals deal with important but peripheral issues, such as attempts to increase the value of forest biomass by creating biomass markets.

These efforts contribute little to either a definition of or a long-term commitment to a comprehensive national policy on forest fuels and fire management. They also address few of the scientific and technical elements underlying management programs. Indeed, the forest condition classification used in these initiatives to identify forests at risk is a modeled, coarse-scale spatial analysis of fuels and potential fire regimes that has serious deficiencies as the primary basis for identifying forests vulnerable to uncharacteristically intense fires.

A comprehensive national forest fire policy should consider all aspects of wildfire management, not just fuels and fire suppression. This policy needs to deal with long-term management of fuels and wildfire and consider the full range of ecological and social values, including issues related to forest health and the well-being of communities and people. Fire and fuel policy should also be an integral part of an overall vision for stewardship and management of the nation’s forests.

To be rational and effective, this fire policy should be grounded in scientific principles and data. Relevant scientific information already exists on three essential topics. The first is knowledge of pre-European settlement fire patterns in the major forest types and regions of North America. The second is the effect of human activities on fuels and normal fire frequency. The third is forest ecology, including tree regeneration and succession after wildfire.

We have identified several scientific issues that should be considered in developing a national forest fire policy. Some of these issues, such as prescriptions for fuel treatments and landscape-level planning, are not appropriately considered at the level of national policy, but they are scientific and technical issues that need to be understood by those developing and debating national policy. Our objective is to make clear that there is a large base of scientific knowledge available for developing a national forest fire policy.

All forests are not alike

The coastal rainforests of the Pacific Northwest and the arid pine forests of the Southwest are not comparable ecologically and present quite different opportunities and social risks. Why should they be governed by similar policies? The starting point for any rational fire policy is recognition that different forest types and regions vary widely in their characteristic or natural fire patterns. A science-based fire management policy must accommodate this variability.

Before effective fire suppression began early in the 20th century, many forests of ponderosa pine and mixed conifers in western North America were subject to frequent low- to moderate-intensity fires; fire return intervals of three years to two or three decades were common. Fire suppression programs have been so effective that they have allowed the fuel loads in these forests to accumulate to levels that create the potential for previously unknown intense stand-replacement fires, which kill all or most of the large trees.

Stand-replacement fires are characteristic of many other western and boreal forest types, however. Pacific slope forests of Douglas fir and associated species in the Pacific Northwest are an outstanding example; stand-replacement fires typically occur at intervals of 250 to 500 years in these forests. Most of the subalpine or high mountain forests of western North America—composed of spruce, true fir, and lodgepole pine—are also of this type. Fire suppression programs have not modified fuel loads and fire patterns significantly in these forests. Indeed, fuel treatments sufficient to modify fire behavior in these forests would produce very unnatural forest ecosystems. For example, treating fuels to eliminate stand-replacement fires in coastal Douglas fir forests would result in forests that no longer provided suitable habitat for northern spotted owls and many other old-growth-related species.

These differences in typical fire patterns among forest types should influence fire suppression as well as fuel treatment policies. Active efforts to suppress fire can be appropriate in forests subject to stand-replacement fire, particularly where important resources are at risk. Wildfire suppression will often be inappropriate, however, in forest types that were characteristically subject to frequent low- to moderate-intensity fire levels.

Neither fire suppression nor treatment of forest fuels should be seen as a universal panacea.

Variability in forest fire patterns can be very local as well as regional, and fire policies must recognize that. Many forest landscapes, particularly in western North America, are actually mosaics of forests with contrasting fire patterns. Forest conditions and characteristic fuel loadings, fire patterns, and suppression policies may differ sharply on adjacent north and south slopes or at different elevations in the same river valley, with low-intensity fires at low elevations and on south slopes and stand-replacement fires on north slopes and higher elevations.

Differentiating forest community types

Some have argued that it is impossible to address forest variability in devising national policy. Science-based stratifications are too complex to be comprehended and incorporated into legislation appropriately, they say. We say that it is not only possible but also imperative to recognize local variations and fundamental differences among forest types as a part of national policy.

Fortunately, there is already a national classification of forest types that incorporates characteristic fire patterns and fuel loadings and can be used as the basic stratification for implementing fire management policies. This is the comprehensive plant association or habitat type classification system developed for wildlands by scientists in federal agencies. There are hundreds of individual plant associations, but these are easily gathered into a much more limited number of plant association groups (PAGs) that have comparable fire patterns and appropriate fire management policies. A particular strength of the PAG classification is that it is just as relevant for national policies as it is for a resource manager planning a fuels management program within a local watershed. This classification can also be applied to other contentious issues, such as management of old-growth forests, where it can provide a solid scientific basis for policy decisions.

Transient conditions, such as classifications of fuel loadings, are not appropriate as the primary basis for developing and applying fire policies. An example is the wildland fire and fuel management spatial data set and current condition classification recently published by the Rocky Mountain Research Station in Fort Collins, Colorado. This is a coarse classification that was never intended for use at a local level. The five classes created to represent historical fire regimes also do not accurately portray conditions or risks in at least some forest regions, most notably western Washington and northwestern Oregon. The use of condition classes such as fuel loadings is most appropriate as a secondary stratification within the PAG classification.

A final important point is that, contrary to what one might expect, fire suppression has not necessarily had the greatest impact on fuel accumulations on sites and in forest types that historically have had the most frequent fires. After a century of fire suppression, a forest belonging to the ponderosa pine PAG may be many fire-return intervals outside its historical range—perhaps 100 years without wildfire, where the historical pattern was a fire every 5 to 10 years. But the effects of 100 years of suppression on amounts and arrangements of fuels and the potential for an uncharacteristic stand-replacement fire actually may be much greater in a mixed conifer forest belonging to the white fir PAG, which is only three or four intervals outside its normal fire cycle of 10 to 60 years. This is because the white fir site is much more productive than the ponderosa pine site, resulting in more rapid fuel accumulations and the development of white fir fire ladders between ground and crown fuels. Historic fire-return intervals are therefore not always the best basis for setting fuel treatment priorities.
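To make the arithmetic behind “intervals outside the historical range” concrete, the short sketch below simply divides the years elapsed without fire by a site’s historical fire-return interval. It is an illustrative calculation only: the interval figures are taken from the examples above, and the 30-year white fir value is an assumed mid-range number, not a figure from this article.

```python
# Minimal illustrative sketch: how many historical fire-return intervals
# have elapsed without fire on a site. Interval figures come from the
# examples in the text; the 30-year white fir value is an assumed mid-range.

def intervals_missed(years_without_fire: float, return_interval: float) -> float:
    """Historical fire-return intervals elapsed with no fire on the site."""
    return years_without_fire / return_interval

# Ponderosa pine PAG: a century of suppression, historical fire every 5 to 10 years.
print(intervals_missed(100, 5), intervals_missed(100, 10))   # 20.0 10.0

# White fir PAG: same century of suppression, assuming a roughly 30-year
# interval within the 10-to-60-year range -> only three to four intervals missed.
print(round(intervals_missed(100, 30), 1))                    # 3.3
```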

Uncharacteristic patterns

The uncharacteristic live and dead fuel loadings, fire behaviors, and fire effects that the United States has experienced in the past few years are not just the result of fire suppression. They are also the result of human activities including grazing, logging, and planting dense stands of trees after green or salvage logging. The importance of these human activities varies with locale. Many sites have been affected by multiple activities that are often synergistic in their effects. Programs to correct these conditions probably also need to vary.

Humans create uncharacteristic fuel loadings both actively and passively. With wood production as a primary management objective, foresters have established dense, fully stocked forest stands on sites formerly occupied by open stands with fewer trees. In national forests on the western slopes of the Sierra Nevada, thousands of acres of open forests dominated by old-growth pine have been converted to dense single-age plantations during the past 50 years. In many areas throughout western North America, uncharacteristic stand-replacement wildfires have been followed by reforestation programs that recreate the dense young forests, providing the potential for yet another stand-replacement fire.

Fire management programs should also address the ability of a stand of trees to persist through a fire and to recover after one. Effective prescriptions for fuel treatments must, therefore, include both the amounts and spatial distribution of the fuels and the retention of the most fire-resistant trees. There are four key elements to consider: surface or ground fuels, ladder fuels, overstory canopy density or continuity, and large trees of fire-resistant species. National legislation is not likely to address technical details such as these, but individuals debating and formulating fire policy should at least know what kind of stand treatments actually influence fire behavior. Traditional commercial logging activities are focused on the removal of large saleable trees, not the amount and arrangement of these fuel elements.

The potential effectiveness of a proposed project to reduce fuels and alter fire patterns can be judged by whether the treatment deals with at least one or preferably all three of the fuel elements: surface fuels, ladder fuels, and canopy density. Surface fuels include grasses, shrubs, and tree seedlings, as well as litter and woody debris on the forest floor. Surface fuels are removed primarily to reduce potential flame lengths to acceptable levels. Ladder fuels typically consist of small and intermediate-sized trees, and treatment is aimed at reducing the ability of fires to move from the ground into the crowns of large trees. Overstory canopy density influences the ability of a fire to spread through the tree crowns, so the goal is to increase the spacing between them.
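As a rough illustration of that screening criterion, the hypothetical sketch below scores a proposed project by how many of the three fuel elements it actually treats. The class, field names, and example projects are assumptions made for illustration; they are not part of any agency prescription described here.

```python
# Hypothetical sketch: screening a proposed fuel-reduction project by whether
# it treats each of the three fuel elements named in the text. The example
# projects are illustrative assumptions, not real prescriptions.

from dataclasses import dataclass

@dataclass
class FuelTreatment:
    reduces_surface_fuels: bool   # litter, woody debris, grasses, shrubs, seedlings
    reduces_ladder_fuels: bool    # small and intermediate-sized trees
    reduces_canopy_density: bool  # spacing between overstory crowns

    def elements_addressed(self) -> int:
        """Count of the three fuel elements the treatment actually addresses."""
        return sum([self.reduces_surface_fuels,
                    self.reduces_ladder_fuels,
                    self.reduces_canopy_density])

# A traditional commercial harvest that removes only large saleable trees
# addresses none of the three elements (and may even add slash to surface fuels).
commercial_harvest = FuelTreatment(False, False, False)
print(commercial_harvest.elements_addressed())   # 0

# A thin-from-below plus prescribed-burn prescription addresses all three.
thin_and_burn = FuelTreatment(True, True, True)
print(thin_and_burn.elements_addressed())        # 3
```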

The scientific consensus is that large and old trees should generally be retained, especially fire-resistant species such as pines. Indeed, from an ecological perspective these are absolutely the last trees that should be removed. Large and old trees are the most likely to survive a fire and subsequently serve as focal points for recovery. Large and old trees are also critical wildlife habitat, in part because they are the source of the standing dead trees (snags) and logs where animals live. Large old trees are essentially irreplaceable because they take centuries to reach that state.

There is no agreement, however, on how best to incorporate the retention of large and old trees into policy and regulation. Proposed approaches have included diameter limits (cut no tree larger than “x”), age limits (cut no tree older than “x”), and leaving the top “x” percentile of the largest trees in the stand.

One complication is that the definition of a large and old tree varies because of differences in species and site productivity. Hence, large-tree retention guidelines need to accommodate site-to-site variability. Here, once again, the PAGs can help provide appropriate site-based guidelines.

Another complication is that removing large trees is sometimes necessary to achieve overall fuel treatment goals. Relatively large trees of shade-tolerant species such as white fir (those 21 inches or more in diameter at breast height) have developed on many productive mixed-conifer sites since fire suppression programs were instituted a century ago. These trees often provide the fuel ladders that put old-growth pine or giant sequoia trees at risk, as well as increasing overall stand canopy densities. Both conditions greatly increase the potential for stand-replacement fires. Restoring characteristic fuel loadings and wildfire behavior, to say nothing of prescribed burning programs, often requires removal of some of these larger but relatively young trees.

Retaining large and old trees is one of the most contentious ecological issues in the current debates over fuel reduction programs. Environmentalists often view large tree removal as motivated by economic goals rather than ecological objectives and as a potential wedge for the resumption of large-scale commercial logging on public lands. Other participants in the debate, including the current administration, view the removal of large trees as necessary to pay for expensive fuel treatments and to provide wood to support local industries. Many managers view the issue simply in terms of balancing effective fuel treatments with other ecological or economic objectives.

Although there is a large base of scientific knowledge available for developing a national forest fire policy, it is largely ignored in current policy proposals.

Logging as a part of fuel treatment programs is an issue that deserves serious consideration by everyone in the forest fire policy debates. On the one hand, traditional commercial logging operations are unlikely to improve fuel loadings significantly or alter potential fire behavior for the better. Such operations are not focused on the key ground and ladder fuels, and they also contribute additional ground and ladder fuels in the form of debris called slash. On the other hand, effective fuel removal is expensive when high densities of ground and ladder fuels exist, because at least some of them have to be removed, burned, or otherwise treated. Project costs can often exceed $1,000 per acre for an initial fuel treatment. Logging of small trees will rarely cover even the direct costs of fuel treatments because such trees currently have little economic value and are likely to have even less in the future. Hence, subsidizing fuel treatments by selling medium-sized trees that need to be removed anyway seems appropriate, given the scale of the challenge and the desire to reduce the impact on taxpayers.

No magic bullet

An effective national policy on forest fuels and fire management requires sustained long-term programs involving several treatments. Today’s conditions have been developing for more than a century and generally cannot be corrected with a single treatment. In a stand with significant fuel accumulations, for example, an initial prescribed burn will typically generate additional fuel. A burn kills trees and shrubs but often does not consume them; instead, it turns them into dead fuel. Relatively prompt follow-up treatment, such as a second prescribed burn, may be needed to eliminate the new fuel.

Fire management programs require repeated treatments that are planned and implemented at appropriate spatial scales. Forests will continue to regenerate and, in the process, accumulate fuels, sometimes (as in the moist mixed-conifer zones) at high rates. Fuel treatments and prescribed burns must be at a sufficient scale to affect the behavior of the fire. Studies of recent fires such as the 1994 Wenatchee fires in Washington and the 2002 Hayman fire in Colorado show that small treated areas surrounded by areas with high fuel loadings and potential firestorms often do not survive, let alone significantly affect overall fire behavior. Designing treatments as part of a strategic landscape plan also is critical. One example is locating fuel breaks so as to limit fire spread and serve as anchor points for more widespread prescribed fire.

National policy must also take into consideration the fact that human habitation and development are increasingly intermixed with forests, making them potentially vulnerable during wildfires. The wildland/development interface is emphasized in current policy initiatives. However, fuel treatments of forests outside this interface are necessary to prevent significant losses of forest attributes that are important to society, such as wildlife habitat and watersheds. Large areas of the Sierra Nevada mixed-conifer forest, for example, are likely to experience uncharacteristic stand-replacement fires without active fuel treatments and prescribed burning programs, with the resulting loss of critical watershed and habitat for the California spotted owl and other endangered wildlife. Substantial restoration efforts will be needed outside of the wildland/development interface to protect them.

Some participants in fire management policy debates argue that wildland forests can and should be left to “natural” restorative processes. Unfortunately, today’s ecological and social conditions differ greatly from past conditions, making many fires and their consequences undesirable. Large unnatural accumulations of fuels result in fires of unprecedented intensity; and exotic plants, pests, and pathogens alter recovery processes dramatically, further modifying a landscape in which critical habitat for native biodiversity is already severely limited. Nature will “correct” the unstable conditions that humans have created in the fire-prone wildlands, but the new landscape will not resemble presettlement forests. Letting nature take its course in the current landscape is certain to result in losses of native biodiversity and ecosystem functions and other social benefits.

No back to the future

The goals of restoration—sometimes described as a “desired future condition”—are often based on a hypothesized “natural” condition that existed before European settlement. The objective is to bring forest composition and structure, including fuel loadings, back within the range of conditions that existed before the fire suppression policy began. The wildland/development interface is an exception; management goals there relate to human health and welfare rather than the health and welfare of the forest.

But it is high time to consider desired future conditions that are unprecedented but ecologically sustainable. Restoring forests to an approximation of their state in the 19th century may be appropriate in some areas, but fire management policies need to consider a broader spectrum of possibilities. Today’s fragmented landscapes and aggressive introduced organisms mean that 19th-century conditions can never be replicated precisely, although they might be approximated.

In addition, people prize forest attributes that are different from those of the past. They may value conditions that were not part of the presettlement forest, such as abundant browse for wildlife. They may abhor some normal presettlement conditions, such as pervasive smokiness. Some of these desires may be mutually exclusive, but others may be achievable and sustainable.

Thus, it is inappropriate to base management goals exclusively on some previous real or hypothesized condition, particularly outside of wilderness and other natural areas. Since we can’t go home again, we must think seriously about working toward forests that differ, sometimes significantly, from those of the past. The potential for defining and evaluating alternative sustainable goals, and ultimately managing to achieve the ones people want, is improved greatly by rapid recent expansions in scientific understanding of the natural history of species, forest ecosystems, landscapes, and disturbances.

After the fire

What are appropriate restoration treatment policies after a fire? The topic is contentious, involving matters such as timber salvage and seeding or planting of plant cover. But there, too, significant new scientific knowledge can be of help.

Natural forest disturbances, including fire, kill trees but remove very little of the total organic matter. Combustion rarely consumes more than 10 to 15 percent of the organic matter, even in stand-replacement fires, and often much less. Consequently, much of the forest remains in the form of live trees, standing dead trees, and logs on the ground. Also, many plants and animals typically survive such disturbances. This includes living trees, individually and in patches.

These surviving elements are biological legacies passed from the predisturbance ecosystem to the regenerating ecosystem that comes after. Biological legacies are crucial for ecological recovery. They may serve as lifeboats for many species, provide seed and other inocula, and enrich the structure of the regenerated forest. Large old trees, snags, and logs are critical wildlife habitat and, once removed, take a very long time to replace.

Management of postburn areas, including timber salvage, needs to incorporate the concept of biological legacies. Salvaging dead and damaged trees from burns involves the ecology of a place, not simply economics and fuels. In addition to effects on postfire wildlife habitat, there are also effects of salvage logging on soils, sediments, water quality, and aquatic organisms. Significant scientific information exists on this topic as well as on biological legacies.

Biological legacies differ by orders of magnitude in natural forests, a fact that should guide restoration programs. Where stand-replacement fires are characteristic, such as with lodgepole pine and Pacific Coast Douglas fir forests, massive areas of standing dead and down trees are usual; salvage operations generally are not needed and do not contribute to ecological recovery, even though they do provide economic return. On the other hand, uncharacteristic stand-replacement fires in dry forests can produce uncharacteristic levels of postfire fuels, including standing dead and down trees. Removing portions of that particular biological legacy may be appropriate as part of an intelligent ecological restoration program, and not simply as salvage.

From an ecological perspective, large, older, fire-resistant trees are the last ones that should be removed in any fuel treatment or post-fire restoration program.

Policies regarding artificial revegetation after wildfires, such as seeding grasses or other plants and tree planting, also need to be based on credible current science. There are many tradeoffs. Seeding to provide rapid protective cover may interfere with natural recovery and introduce exotic plants. Native plants that regenerate from seed rather than by sprouting will suffer from competition with seeded grasses.

Decisions regarding planting trees need to be based on ecological and economic objectives as well as characteristics of the forest type. Where timber production is a primary objective and dense forest stands are characteristic, reestablishing plantations of commercial tree species is often appropriate. However, establishment of dense forests is inappropriate where they did not exist before. Doing so simply recreates the potential for uncharacteristic fuel loadings and fires. Such a naturally unsustainable condition is only appropriate if there is a serious long-term commitment to managing the site for intensive timber production. But this does not apply to many federal forests where intensive wood production is neither consistent with ecological goals nor economically sound.

Tree planting to reestablish closed forest cover on burned sites also may be a bad idea, depending on ecological objectives. Large disturbed areas, which regenerate slowly and include complete legacies of snags and logs, are often hotspots for regional biodiversity. The unsalvaged and unplanted areas devastated by the 1980 volcanic eruption of Mount St. Helens, for example, possess extraordinarily rich communities of birds, amphibians, and midsized mammal predators. Aggressive timber salvage and tree planting programs dramatically limit both the biological potential and the duration of this early successional stage.

In short, postfire treatment policies such as timber salvage, seeding, and replanting should incorporate current scientific findings, especially about how forest ecosystems recover from natural disturbances. We do not want to create new problems or perpetuate old ones by salvaging too much or too little or by establishing dense new plantations on burned sites where timber production is not a primary objective.

There is one way in which current administrative and legislative efforts, typified by the administration’s Healthy Forests Initiative and H.R. 1904, do set fire policy. They assume that we will treat forest fuels in the wildland/development interface to reduce loss of structures and life. Although there is no stated national policy on dealing with fires in these intermixed landscapes, there has long been a de facto policy in the United States and Canada that human developments interspersed among wildlands will be protected from fire. In some mixed landscapes, requirements for safer building materials and clearing vegetation away from houses are emerging, but it is not clear who will enforce these rules or how. The assumption that human settlements will be protected no matter where they are has deep roots in history and is even backed by some case law, as in the protection of Southern California houses on chaparral-covered hillsides.

But aside from this underlying assumption, these proposals fail to incorporate most of the elements that we have identified as the basis for a scientifically credible forest fuels and fire management policy. They do not take account of the variability of natural fire patterns, and they do not recognize that fire policy needs to accommodate this variability. Most fundamentally, however, the initiatives set no goals for a comprehensive national forest fuel and fire management policy or for the long-term commitments necessary to implement such a policy.

Crime Labs Need Improvement

In their examination of the criminal convictions of 62 men who were later exonerated by DNA evidence, Barry Scheck, Peter Neufeld, and Jim Dwyer concluded that a third of the cases involved “tainted or fraudulent science.” Although in some cases rogue experts were directly to blame, a much larger problem exists: The forensics profession lacks a truly scientific culture—one with sufficient written protocols and an empirical basis for the most basic procedures. This results in an environment in which misconduct can too easily thrive. Stated another way, forensic science needs more science.

On an individual level, one of the most notorious cases involved Fred Zain, the chief serologist of the West Virginia State Police Crime Laboratory. A judicial report found that Zain committed many acts of misconduct over 10 years, including overstating the strength of results, reporting inconclusive results as conclusive, repeatedly altering laboratory records, grouping results to create the erroneous impression that genetic markers had been obtained from all samples tested, and failing to report conflicting results. In reviewing the report, the West Virginia Supreme Court spoke of “shocking and egregious violations” and the “corruption of our legal system.”

On a systemic level, perhaps the best example is the Federal Bureau of Investigation (FBI) laboratory, considered to be the country’s premier crime lab. A 1997 Inspector General’s report on the lab found scientifically flawed testimony, inaccurate testimony, testimony beyond the competence of examiners, improper preparation of laboratory reports, insufficient documentation of test results, scientifically flawed reports, inadequate record management and retention, and failures of management to resolve serious and credible allegations of incompetence. The report’s recommendations are revealing because they are so basic. They include: seeking accreditation of the laboratory by the American Society of Crime Laboratory Directors/Laboratory Accreditation Board (ASCLD/LAB); requiring examiners in the explosives unit to have scientific backgrounds in chemistry, metallurgy, or engineering; mandating that each examiner prepare and sign a separate report instead of a composite report without attribution to individual examiners; establishing a review process for analytical reports by unit chiefs; preparing adequate case files to support reports; monitoring court testimony in order to preclude examiners from testifying to matters beyond their expertise or in ways that are unprofessional; and developing written protocols for scientific procedures. In short, the report called for scientific management.

Accreditation lacking

More than a decade ago, molecular biologist Eric Lander, who served as an expert witness in one of the first court cases involving DNA evidence, noted: “At present, forensic science is virtually unregulated, with the paradoxical result that clinical laboratories must meet higher standards to be allowed to diagnose strep throat than forensic labs must meet to put a defendant on death row.” Since that time, there have been a number of voluntary attempts to improve crime laboratories, such as the accreditation process of the ASCLD/LAB. Nevertheless, except for New York, Texas, and Oklahoma, there is no mandatory accreditation. A similar situation exists with death investigation agencies accredited by the National Association of Medical Examiners. Although 40 medical examiner systems have been accredited, they cover only 25 percent of the population. In addition, certification rates are low for practicing forensic scientists, even though forensic certification boards for all the major disciplines have been in existence for more than a decade.

Although it is not the only reason, lack of funding is a major contributing factor to these failures. Meeting accreditation and certification standards costs money, and crime labs have been chronically shortchanged. In 1967, President Johnson’s Crime Commission noted that, “The great majority of police department laboratories have only minimal equipment and lack highly skilled personnel able to use the modern equipment now being developed.” In 1974, President Nixon’s Crime Commission commented: “Too many police crime laboratories have been set up on budgets that preclude the recruitment of qualified, professional personnel.” Twenty years later, an investigation of Washington state crime labs revealed that a “staggering backlog of cases hinders investigations of murder, rape, arson, and other major crimes.” At any time, “thousands of pieces of evidence collected from crime scenes sit unanalyzed and ignored on shelves in laboratories and police stations across the state.” A USA Today survey in 1996 reached the same conclusion: “Evidence that could imprison the guilty or free the innocent is languishing on shelves and piling up in refrigerators of the nation’s overwhelmed and underfunded crime labs.” In one case reported by the newspaper, a suspected serial rapist was released because it was going to take months to get the DNA results needed to prove the case. Weeks later, the suspect raped his fourth victim as she slept in her home. When the DNA tests finally came back—18 months after samples first went to the lab—a jury convicted the suspect of all four rapes.

It is clear, then, that to improve scientific evidence in criminal cases, the nation’s crime laboratories must be improved. They need to be funded so they can be accredited and their examiners certified. The lessons learned from the DNA admissibility wars should not be forgotten. Valid protocols and rigorous proficiency testing are important. As the FBI’s leading DNA expert later conceded, there were significant problems when DNA evidence was first introduced in court: “The initial outcry over DNA typing standards concerned laboratory problems: poorly defined rules for declaring a match; experiments without controls; contaminated probes and samples; and sloppy interpretation of autoradiograms. Although there is no evidence that these technical failings resulted in any wrongful convictions, the lack of standards seemed to be a recipe for trouble.” Moreover, the National Research Council’s (NRC’s) first of two reports on DNA observed: “No laboratory should let its results with a new DNA typing method be used in court, unless it has undergone such proficiency testing via blind trials.” The same types of standards now required for DNA testing should apply to all forensic examinations.

The need for basic research

Another critical issue is the lack of basic scientific research. Many forensic techniques were developed in crime labs, not research labs, and they gained judicial acceptance before the demanding standards of the Supreme Court’s Daubert decision were in place. As recent cases have demonstrated, there is an embarrassing lack of empirical research on well-accepted techniques such as fingerprinting, firearms identification, and bite-mark comparisons.

Hair comparison evidence illustrates both the lack of empirical research and the misuse of expert testimony. Most courts have upheld the admissibility of hair comparison evidence. After Daubert was decided, however, the district court in Williamson v. Reynolds, a habeas corpus case, took a closer look at this type of evidence. In the case, an expert testified that hair samples were “microscopically consistent,” explaining that “hairs are not an absolute identification, but they either came from this individual or there is [or] could be another individual somewhere in the world that would have the same characteristics to their hair.” The district court noted that the expert did not explain which of the approximately 25 characteristics were consistent, any standards for determining whether the samples were consistent, how many persons could be expected to share this same combination of characteristics, or how he arrived at his conclusions. Moreover, the court professed that it had been “unsuccessful in its attempts to locate any indication that expert hair comparison testimony meets any of the requirements” of Daubert. The court further observed: “Although the hair expert may have followed procedures accepted in the community of hair experts, the human hair comparison results in this case were, nonetheless, scientifically unreliable.” Finally, as is often the case, the prosecutor exacerbated the problem by telling the jury during his closing argument that a match existed. Even the state court misinterpreted the evidence, writing that the “hair evidence placed [petitioner] at the decedent’s apartment.” The district court decision was subsequently reversed because due process, not Daubert, provided the controlling standard for habeas review. The accused, however, was later exonerated by exculpatory DNA evidence, and as Scheck and his colleagues observe, “The hair evidence was patently unreliable.”

Although the FBI and the National Institute of Justice should fund research on forensic science, it should be done by independent organizations.

In another case, the expert testified that the crime scene hair sample “was unlikely to match anyone” other than the defendant, Edward Honaker, who had been charged with rape. This conclusion was a gross overstatement. At best, the expert could have testified that the crime scene hairs were “consistent with” the defendant’s exemplars, which means that they could have come from Honaker or thousands of other people. We simply have no idea how many other people have the same characteristics. Honaker spent 10 years in prison before being exonerated by DNA analysis.

Roger Coleman was executed in 1992 for a slaying in rural Virginia. The same expert who had testified against Honaker also testified against Coleman, and in the same manner. The U.S. Supreme Court ruled that a lawyer’s mistake in filing Coleman’s state collateral appeal one day late precluded federal habeas review. Serious questions about Coleman’s innocence have since been raised, and the prosecution’s use of the hair evidence was, to say the least, suspect. While conducting research for his book on the Coleman case, John Tucker interviewed the trial judge, who said he thought the expert’s testimony about the comparison of the pubic hairs had the most powerful impact on the jury. It was, the judge said, the first and only testimony that seemed to tie Coleman to the murder. As Tucker correctly notes: “A finding of consistency is highly subjective, and experts may and often do disagree about such a finding.” Tucker describes the testimony as follows: “Nor did [the expert] compare the pubic hairs found on Wanda [the victim] with anyone other than Coleman and Wanda herself—not even her husband Brad. Nevertheless, when he asserted that he had made a comparison of those hairs with Roger’s pubic hair, and that the hairs were ‘consistent’ with each other, meaning, he said, that it was ‘possible, but unlikely’ that the hairs found on Wanda could have come from anyone other than Roger Coleman, the jurors exchanged glances and settled back in their seats.”

Only the federal government, specifically the FBI and the National Institute of Justice, has the resources to fund the needed research on forensic science. The actual research, however, should be done by independent organizations such as the NRC, which in addition to its DNA reports has conducted studies on voiceprints, polygraph tests, and bullet lead comparisons.

As DNA evidence has demonstrated, expert testimony based on scientific and technical knowledge is often better than other types of evidence commonly used in criminal trials. The danger of eyewitness misidentification has long been recognized and is the single most important factor in wrongful convictions. The unreliability of jailhouse snitches also comes as no surprise. The use of confessions that later turn out to be false has also been documented. Indeed, in Escobedo v. Illinois, the Supreme Court observed: “We have learned the lesson of history, ancient and modern, that a system of criminal law enforcement which comes to depend on the ‘confession’ will, in the long run, be less reliable and more subject to abuses than a system which depends on extrinsic evidence independently secured through skillful investigation.” Nevertheless, the advantages of scientific and technical evidence depend on its reliability, and that turns on whether forensic science is truly a scientific endeavor.

From the Hill – Fall 2003

Big R&D spending increases slated for defense, homeland security

Congress appears poised to substantially increase overall R&D spending for fiscal year (FY) 2004. However, virtually all of the increases would go for defense, homeland security, and health spending.

As of the August congressional recess, the House had approved 11 of the 13 appropriations bills; the Senate, only four. In the House plan, the federal R&D portfolio would increase by $8.4 billion, or 7.2 percent, to $125.9 billion, which is $3.6 billion more than the Bush administration’s request. About 99 percent of the increase would go to the Department of Defense (DOD), the Department of Homeland Security (DHS), and the National Institutes of Health (NIH). Spending would be flat overall for all other R&D agencies, with modest increases for some offset by cuts in others.
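The reported figures are internally consistent, as a quick back-of-the-envelope check shows. The sketch below uses only the numbers cited above; the FY 2003 baseline it derives is inferred from them rather than taken from the appropriations bills, and the small difference from the reported 7.2 percent reflects rounding in the published totals.

```python
# Back-of-the-envelope check of the House FY 2004 R&D figures cited above.
# All amounts are in billions of dollars; the FY 2003 baseline is inferred
# from the reported total and increase, not taken from the bills themselves.

proposed_total = 125.9   # House FY 2004 R&D portfolio
increase = 8.4           # reported increase over FY 2003
above_request = 3.6      # reported amount above the administration's request

baseline = proposed_total - increase              # implied FY 2003 level
pct_increase = 100 * increase / baseline          # implied percentage growth
administration_request = proposed_total - above_request

print(f"Implied FY 2003 baseline:       ${baseline:.1f} billion")
print(f"Implied increase:               {pct_increase:.1f}% (reported as 7.2%)")
print(f"Implied administration request: ${administration_request:.1f} billion")
```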

The House would boost DOD R&D by $7.2 billion, or 12.3 percent, to $66 billion, with weapons systems development accounting for $6.1 billion of the increase. DOD’s science and technology (S&T) account would be increased by 9.7 percent to $12.3 billion. DHS would see its R&D portfolio surge by $385 million, or 57.5 percent, to $1.1 billion, as the new department ramps up its S&T capabilities.

After five years of annual 15 percent increases, NIH budget growth would slow considerably in FY 2004. The House would match the president’s request with a 2.7 percent increase in R&D spending—a $702 million addition.

The National Science Foundation (NSF) R&D budget would increase by 6.2 percent to $5.6 billion under the House plan. But that would be about $1 billion less than is needed to fulfill the House’s pledge to double NSF’s R&D budget between FY 2002 and 2007.

The Department of Energy’s (DOE’s) Office of Science would receive a 4.3 percent boost to $3.2 billion, and the National Aeronautics and Space Administration’s R&D portfolio would edge up 0.9 percent to $11.1 billion. There would be steep cuts in R&D spending in the U.S. Department of Agriculture (9.8 percent), the Department of Transportation (15 percent), and the Department of Commerce (21.5 percent).

The House’s focus on defense and homeland security means that after several years of near-parity between defense and nondefense R&D spending, the defense share of federal R&D spending would rise to 56 percent.

In the limited number of agencies on which the Senate has taken action, its proposed funding levels differ only modestly from those of the House. The Senate would provide a 3.8 percent increase for NIH, a 4.7 percent increase for DOD S&T, and a 1.2 percent increase for DOE’s Office of Science.

DOE to compete Los Alamos National Laboratory contract

In the wake of several years of controversy about the national weapons labs, DOE has said that for the first time, competitive bidding will be used when the contract with the University of California (UC) to manage and operate Los Alamos National Laboratory expires in 2005. In addition, the House has added a rider to an appropriations bill that requires competition for some other DOE labs as well.

A host of problems has resulted in intense scrutiny of the labs. Allegations of espionage; lost hard drives containing classified information; and, most recently, accusations of financial mismanagement have hit Los Alamos. Lawrence Livermore National Laboratory, also managed by UC, has faced controversy over alleged security lapses and cost overruns during construction of the National Ignition Facility.

In an attempt to shore up the security of classified weapons programs, Congress in 2000 created the National Nuclear Security Administration, a semi-autonomous agency within DOE, to oversee the nation’s nuclear weapons complex, including Los Alamos and Livermore. However, the reorganization failed to quell controversy about the labs, and criticism flared up late last year when allegations of financial abuses at Los Alamos prompted the resignation of the lab’s director and management reviews by UC and DOE.

The origin of the relationship between UC and DOE dates to 1942, when Los Alamos was founded by J. Robert Oppenheimer, a UC Berkeley physicist who served as scientific director of the Manhattan Project, the secret federal research program to develop an atomic bomb. The following year, UC agreed to manage the facility for the federal government. Thus, a partnership was created to run Los Alamos as a government-owned contractor-operated (GOCO) laboratory. The partnership allowed the lab to benefit from the university’s ability to draw talent, even as it worked on such an inherently governmental function as nuclear weapons development. The GOCO model was adopted at numerous other government labs.

Ever since UC took over Los Alamos in 1943 and Livermore in 1952, the management contracts between the university and the government have always been renewed. But in the aftermath of each scandal, critics have called for opening the contracts to competition.

“Periodic competition should be normal,” said Rep. Billy Tauzin (R-La.), the chairman of the House Energy and Commerce Committee, at a May 1 hearing. “But the pressure of competitive bidding, one of the most powerful cleansers of management problems, has never really bore down on those responsible for the [Los Alamos] lab’s contract.”

Sen. Pete Domenici (R-N.M.), a staunch supporter of the labs, has also expressed support for competition. “We all know that the present manner in which the laboratory is managed must change in ways that are inevitable,” he said in an April speech at Los Alamos. “I worry that the attacks on Los Alamos will only intensify if we do not take dramatic action to improve the lab’s management and reputation.”

Others, however, worry that the uncertainty surrounding competition and the possibility of losing UC’s generous employee benefits could lower morale at the lab and cause a wave of retirements, hindering the lab’s scientific work. Recognizing these concerns, Domenici said he would support competing the contract only if all current employees, except for the most senior officials, are retained and current compensation and retirement benefits are kept in place.

On April 30, Energy Secretary Spencer Abraham announced DOE’s intention to compete the Los Alamos contract in 2005, while endorsing conditions similar to Domenici’s.

UC has expressed ambivalence about participating in the competition. “We want to compete and we want to compete hard,” UC President Richard C. Atkinson said in testimony before the Energy and Commerce Committee. “We believe, with every fiber of our institutional being, that continued UC management is in the absolute best interests of the nation’s security.” But he added that, “It is one thing to manage the national weapons laboratories at the request of the federal government because of the unique scientific capabilities of the university, and quite another to actively pursue what could now be interpreted as a business venture. I am not sure our faculty or the people of California would support such action.”

Some observers have expressed concern about the overall health of DOE’s GOCO partnerships. Siegfried Hecker, a former director of Los Alamos, recently lamented the lack of trust that has grown between DOE and its contractors. Speaking at a June 24 hearing before the House Energy and Natural Resources Committee, he said that DOE’s relationship with the laboratories, driven to a large extent by pressure from Congress, “has changed from one of partnership to an arms-length government procurement.” DOE has appointed a blue-ribbon commission to study these concerns, which is expected to issue a report in the coming months.

The decision by the House to add language regarding lab management to the DOE FY 2004 funding bill has added a new element to the debate. The clause, proposed by Rep. David Hobson (R-Ohio), requires a competitive bidding process for all DOE contracts that have not been competed in the past 50 years. This would affect several labs, including Los Alamos and Livermore; Lawrence Berkeley, which is also operated by UC; Argonne, which is run by the University of Chicago; and Ames, which is managed by Iowa State University.

Although the clause appears to be designed primarily to force competition at Livermore, which just celebrated its 50th anniversary, the other labs cited have generally received good grades for their operations, and some critics argue that competing such contracts would cost more money than it would save. In the case of Ames, the lab contract may be unattractive to other potential bidders because it is located on the Iowa State campus. Opponents of Hobson’s rider argue that DOE should retain the authority to make competition decisions on a case-by-case basis.

The language in the bill also includes a requirement that no conditions be imposed “that may have the effect of biasing the competition in favor of the incumbent contractor.” According to report language accompanying the bill, this provision is intended to prevent Abraham from requiring that the existing Los Alamos workforce be protected.

White House proposes standardizing peer review process

On August 29, the Office of Information and Regulatory Affairs (OIRA) in the White House Office of Management and Budget (OMB) issued a draft proposal to standardize the process that federal agencies use when conducting peer review of scientific information that will be used in setting regulations and policies. The proposal has been met with both praise and concern.

In announcing the standards, OIRA administrator John D. Graham stated, “Peer review is an effective way to further engage the scientific community in the regulatory process. A more uniform peer review policy promises to make regulatory science more competent and credible, thereby advancing the administration’s smart-regulation agenda. The goal is fewer lawsuits and a more consistent regulatory environment, which is good for consumers and businesses.”

The draft guidelines attempt to establish a process by which federal agencies would select peer reviewers, manage the review process, and assess scientific information used in setting regulations or policies. For example, OMB recommends that agencies scrutinize scientists for “real or perceived conflicts of interests” in order to ensure that the individual can approach the subject in an “open-minded and unbiased manner.” In this regard, federal agencies should consider whether the individual “(i) has any financial interests in the matter at issue; (ii) has, in recent years, advocated a position on the specific matter at issue; (iii) is currently receiving or seeking substantial funding from the agency through a contract or research grant (either directly or indirectly through another entity, such as a university); or (iv) has conducted multiple peer reviews for the same agency in recent years, or has conducted a peer review for the same agency on the same specific matter in recent years.”

In a Washington Post article, a representative of the Center for Regulatory Effectiveness (CRE) said the OMB proposal would “put additional teeth in what is meant by peer review.” CRE further suggested that this would provide an opportunity to require federal agencies to reevaluate environmental and dietary guidelines.

The guidelines state that, “Agencies need not, however, have peer review conducted on studies that have already been subjected to adequate peer review.” Further, they state that when considering significant regulatory information, “peer review undertaken by a scientific journal may generally be presumed to be adequate.” However, they also note that, “This presumption is rebuttable based on a persuasive showing in a particular instance,” leaving a window of opportunity open to a reevaluation as suggested by CRE.

OMB Watch, a nonprofit organization that follows public right-to-know, budget, and regulatory issues, expressed concern that the proposal could create a centralized system “dangerously vulnerable to political manipulation.” For example, it cites a clause in the guidelines that would allow federal agencies to retain an outside entity to manage peer review. OMB Watch said that such direct control over the peer review process by a nongovernmental organization could result in undue influence over regulations by industry groups.

The draft proposal, which was not published in the Federal Register but issued as an OMB Bulletin, is open for public comment until October 28, 2003, and is to take effect in January 2004.

Moderate Republicans ask Bush to revise stem cell policy

New research results have prompted moderate Republicans in the House and Senate to focus attention on the issue of federal funding for human embryonic stem cell research. In March, Johns Hopkins University researchers announced the discovery of a promising new way of growing stem cell lines. Until now, stem cells have been produced with the use of mouse cells as “feeder cells” to keep the stem cells from differentiating into more specialized tissue. However, cells that come into contact with mouse cells can be contaminated with animal viruses, presenting a danger to potential patients. The new technique uses human bone marrow cells instead of mouse cells, removing a potential obstacle to scientists’ long-term goal of using stem cell transplants to treat conditions such as diabetes and Parkinson’s disease.

President Bush’s stem cell research policy, announced on August 9, 2001, allows federally funded stem cell researchers to work only on cell lines already created at the time of the announcement, making research on lines created with the new method ineligible for federal funding.

Initially, the administration compiled a list of 78 eligible cell lines, but to date only 11 of these lines have been made available to researchers. Thus, many scientists had been pressing the president to revisit the policy even before the Johns Hopkins discovery.

In response to the concerns of stem cell researchers, 11 moderate House Republicans sent a letter to President Bush on May 15 urging him to review his stem cell policy to determine whether enough lines have been made available and whether changes should be made to allow for the creation of new lines such as those produced at Johns Hopkins. The letter’s signatories included Rep. Michael N. Castle (R-Del.), a leader of the Tuesday Group, an informal coalition of Republican moderates, and Rep. Sherwood Boehlert (R-N.Y.), chairman of the House Science Committee.

In the Senate, meanwhile, Sen. Arlen Specter (R-Pa.), the chairman of the Labor, Health and Human Services (HHS) Appropriations Subcommittee, held a May 22 hearing on the issue at which he sharply questioned NIH Director Elias Zerhouni. Zerhouni defended the administration’s policy and insisted that scientists are moving as fast as possible. He argued that there is no current need for new cell lines, while acknowledging that such a need may arise in the future. However, James Battey, the chair of the NIH Stem Cell Task Force, testified that currently the limiting factor in the progress of stem cell research is a lack of scientists trained in the techniques needed to work on stem cells. Specter criticized Zerhouni for failing to find a nongovernment scientist willing to back the administration’s position.

Specter also renewed longstanding complaints about communication problems with NIH after Zerhouni informed the subcommittee that 16 of the 78 cell lines eligible for federal funding have not been exposed to mouse cells. Specter contended that HHS Secretary Tommy Thompson had previously informed him that all of the eligible lines had been exposed to mouse cells, and he sharply rebuked Zerhouni for this “flat-out contradiction.” In the past, Thompson has faced criticism from Specter for allegedly allowing his staff to edit communications from NIH scientists to Specter’s subcommittee. The 16 uncontaminated lines are not among the 11 cell lines currently available.

One witness, John Kessler of Northwestern University Medical School, confronted Zerhouni’s statements head on. He argued that scientists should work on different techniques for developing stem cells “in parallel” but that the administration was forcing scientists to proceed “serially.” Both the technique used to derive stem cells and their genetic makeup can alter the cells’ properties in important ways, he said, creating a need for the derivation of new lines.

The administration’s stem cell policy applies only to federally funded research, so whether or not President Bush agrees to revisit the issue, privately funded researchers, such as those that produced the recent discoveries at Johns Hopkins, may continue to study new cell lines. As these researchers and scientists abroad report new discoveries, policymakers will face the continuing challenge of ensuring that U.S. policy stays up to date.

Congress considers ethical, social impact of nanotechnology research

House and Senate bills promoting nanotechnology R&D have highlighted differences in Congress concerning the ethical, legal, and social implications (commonly referred to as ELSI) of the emerging technology.

Nanotechnology research involves manipulating matter at the atomic and molecular levels. (One nanometer is a billionth of a meter.) Researchers foresee promising applications in an array of fields, including electronics, medicine, and environmental technology. Proponents of nanotechnology envision everything from improved water filtration devices for cleaning the environment to nanosized robots that can be injected into the body and deliver drugs to targeted cells. Others, however, express concern that unleashing these promising products prematurely could unintentionally lead to new health and environmental hazards that science and society will be ill-prepared to handle.

The House bill, the Nanotechnology Research and Development Act of 2003, would authorize $2.4 billion in research funds through FY 2006 in support of interagency nanotechnology activities. During the House Science Committee debate on the bill, Reps. Brad Sherman (D-Calif.) and Chris Bell (D-Tex.) proposed devoting 5 percent of the total federal nanotechnology budget to research into ELSI issues. The 5 percent allocation mirrors a similar set-aside for ELSI programs in the interagency Human Genome Project established in 1988.

Opponents of the Sherman/Bell amendment, including House Science Committee Chairman Sherwood Boehlert (R-N.Y.) and Rep. Brian Baird (D-Wash.), argued that such a percentage was arbitrary and unnecessary in light of language in the House bill requiring that federal programs include activities that “ensure that societal and ethical concerns will be addressed as the technology is developed.”

Other House members, including Reps. Dana Rohrabacher (R-Calif.) and Joe Barton (R-Tex.), raised the specter of establishing a “social elite” of Ph.D.s analyzing societal implications that, in their view, should be assessed by the public at large. “We don’t need a bunch of pontificators,” Rohrabacher said. He argued that it is the role of Congress to debate and determine ethical issues.

Baird, who is a Ph.D. psychologist, quickly came to the defense of social scientists. “I’d like to disassociate myself from my colleague’s comments,” he stated. “We need bright people” in order to realize the full potential of this burgeoning field. Baird said that as members of the House Science Committee “we should not demean the contributions of scientists.”

Although the Sherman/Bell amendment failed to pass, the bill as a whole passed the House 405 to 19.

Whereas the House bill gives the responsibility for defining and managing ELSI issues to the participating agencies, with coordination by an interagency committee, the Senate bill would elevate the ethical and societal components of the federal program.

The 21st Century Nanotechnology Research and Development Act, introduced by Sens. Ron Wyden (D-Ore.) and George Allen (R-Va.), would create a separate American Nanotechnology Preparedness Center funded through NSF at $5 million annually. The center would conduct studies to help decisionmakers better anticipate ELSI issues that are likely to arise as the field matures.

Another key difference between the House and Senate bills is the management and composition of an external advisory committee to provide oversight and assessment of the progress of the interagency research programs. The House bill would give responsibility to the existing President’s Council of Advisors on Science and Technology (PCAST), which would tap nanotechnology and ethical experts as appropriate. The Wyden bill, on the other hand, would create a separate stand-alone National Nanotechnology Advisory Panel, which would include such experts as part of its membership.

PCAST has already taken steps to establish its role as an oversight committee. At a June 10 meeting, it discussed a Nanotechnology Work Plan. The plan includes the creation of three task forces composed of PCAST members, each to explore a special topic: Materials/Electronics/Photonics, Energy/Environment, and Medical/Bio/Social.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Assessing Forensic Science

In Issues’ Summer 2002 issue, we had the opportunity to introduce this journal’s readers to several topics that raise complex questions at the intersection of science and the law. That issue dealt with a trio of recent Supreme Court rulings (Daubert, Kumho, and Joiner) on the admissibility of scientific and technical expert testimony; the relationship between the legal protection of intellectual property and the advancement of scientific and technical research; public access to scientific information used in federal regulatory decisions; and balancing the protection of individual privacy and the beneficial use of medical records in public health research. Many of these topics have been analyzed further by the National Academies’ new Science, Technology, and Law Program, which we cochair.

This year, we continue our exploration of issues where science and the law converge by turning our attention to the implications of the Supreme Court’s trio of rulings on the forensic sciences. For 70 years, U.S. courts relied on the standard enunciated in Frye v. United States to determine the admissibility of expert testimony. Under Frye, expert testimony is admissible only if it is “generally accepted” in the relevant scientific community. In Daubert v. Merrell Dow Pharmaceuticals, the Supreme Court, relying on the new Federal Rules of Evidence, declared that scientific expert testimony must be grounded in the methodology and reasoning of science. To determine whether expert testimony meets the Daubert standard, the Court provided trial courts with the following criteria:

  • whether the theories or techniques on which the testimony relies are based on a testable hypothesis;
  • whether the theory or technique has been subject to peer review;
  • whether there is a known or potential rate of error associated with the method;
  • whether there are standards controlling the method; and
  • whether the method is generally accepted in the relevant scientific community.

These criteria are flexible, and no single one alone would be dispositive. Indeed, the Court recognized that some would be inappropriate under certain circumstances. A few years later, in Kumho Tire, the Supreme Court extended the Daubert standard to apply to expert testimony based on a wide range of technical or specialized disciplines while also recognizing that criteria for admission may differ across areas of expert testimony.

Recently, expert testimony based on forensic evidence has been challenged under the Frye standard, which still governs in many state courts, or under the Daubert standard in federal courts. In Ramirez v. State, the Supreme Court of Florida rejected expert testimony asserting that a knife belonging to the defendant’s girlfriend was the instrument used to inflict a fatal stab wound in the victim’s body, holding that the expert’s methods did not meet the Frye standard. In January 2002, an eminent federal district judge, Louis Pollak, using the Daubert standard, granted a defense motion to preclude expert testimony that purported to identify a specific individual on the basis of matching fingerprints. However, Judge Pollak granted a motion for reconsideration of his order and ultimately allowed the prosecution to present identification testimony based on the matching fingerprint. Judge Pollak’s initial ruling, along with challenges to other kinds of forensic evidence, has increasingly led to suggestions that the scientific foundation of many common forensic science techniques may be open to question.

The increased use of DNA analysis, which has undergone extensive validation, has thrown into relief the less firmly credentialed status of other forensic science identification techniques (fingerprints, fiber analysis, hair analysis, ballistics, bite marks, and tool marks). These have not undergone the type of extensive testing and verification that is the hallmark of science elsewhere. These techniques rely on the skill of the examiner, but since the practitioners have not been subjected to rigorous proficiency testing, reliable error rates are not known.

Advances in the forensic sciences have generally emerged to address the needs of the criminal law community. Most of the research has been sponsored by federal agencies whose missions include law enforcement and prosecution, but relatively little science. Scant funding has been provided for competitive basic academic research, and very few, if any, doctoral programs in forensic sciences exist. The culture of academic research, with the free and open exchange of ideas, peer review of research findings, and rigorous disciplinary programs, has not been the norm for the forensic science community.

The challenge of Daubert should lead us to ask how scientific principles can be appropriately applied throughout the forensic sciences, how academic research in the forensic sciences can be promoted, and what the research agenda in this area should be. In the wake of Daubert, the community of forensic scientists may well be pressed to answer these questions in order to maintain their prominent role in U.S. courts.

In this issue, four articles examine the science behind forensic sciences. D. Michael Risinger and Michael J. Saks explore an issue that has already perplexed the Science, Technology, and Law Program: How does the source of funds for and the conduct of research influence the integrity of its outcome? Jennifer Mnookin provides an historical look at the increasing acceptance of fingerprint evidence in law enforcement and in society as well, and explores the implications of the recent legal challenges. David L. Faigman, Stephen E. Fienberg, and Paul C. Stern examine another technique that has long been used in the law enforcement and national security communities: polygraphs. Paul C. Giannelli reviews some of the problems in crime laboratories and makes the case for mandatory accreditation of all forensic labs.

In assessing admissibility under the Daubert standards, courts are seeking a better understanding of the scientific bases of forensic analysis. Courts are inquiring into the relative frequencies at which the identifying traits occur in the general population and the probability of a coincidental match with a crime scene sample. Courts are questioning the standards to which the experts making the identification are held; whether identification is based on objective criteria; and whether standardized minimum criteria must be met for a positive identification. Recognizing that no science consistently produces certain results, courts are also questioning the error rates associated with forensic identification techniques. In these ways, courts are actively seeking an improved understanding of the scientific basis of forensic science and of the body of research required to support expert testimony. We hope the academic and law enforcement communities will do the same.

Fingerprints: Not a Gold Standard

In January 2002, Judge Louis Pollak made headlines with a surprising ruling on the admissibility of fingerprints. In United States v. Llera Plaza, the distinguished judge and former academic issued a lengthy opinion that concluded, essentially, that fingerprint identification was not a legitimate form of scientific evidence. Fingerprints not scientific? The conclusions of fingerprint examiners not admissible in court? It was a shocking thought. After all, fingerprints have been used as evidence in the U.S. courtroom for nearly 100 years. They have long been considered the gold standard of forensic science and are widely thought to be an especially powerful and indisputable form of evidence. What could Judge Pollak have been thinking?

About six weeks later, Judge Pollak changed his mind. In an even longer opinion, he bluntly wrote, “I disagree with myself.” After a second evidentiary hearing, he had decided that despite fingerprinting’s latent defects, the opinions of fingerprint identification experts should nonetheless be admissible evidence. With this second opinion, Pollak became yet another in a long line of judges to preserve the status quo by rejecting challenges to fingerprinting’s admissibility. Since 1999, nearly 40 judges have considered whether fingerprint evidence meets the Daubert test, the Supreme Court’s standard for the admissibility of expert evidence in federal court, or the equivalent state standard. With Pollak’s about-face, every single judge who has considered the issue has determined that fingerprinting passes the test.

And yet, Judge Pollak’s first opinion was the better one. In that opinion, after surveying the evidence, he concluded that, “fingerprint identification techniques have not been tested in a manner that could be properly characterized as scientific.” All in all, he found fingerprinting identification techniques “hard to square” with Daubert, which asks judges to serve as gatekeepers to ensure that the expert evidence used in court is sufficiently valid and reliable. Daubert invites judges to examine whether the proffered expert evidence has been adequately tested, whether it has a known error rate, whether it has standards and techniques that control its operation, whether it has been subject to meaningful peer review, and whether it is generally accepted by the relevant community of experts. Pollak found that fingerprinting flunked the Daubert test, meeting only one of the criteria, that of general acceptance. Surprising though it may sound, Pollak’s judgment was correct. Although fingerprinting retains considerable cultural authority, there has been woefully little careful empirical examination of the key claims made by fingerprint examiners. Despite nearly 100 years of routine use by police and prosecutors, central assertions of fingerprint examiners have simply not yet been either verified or tested in a number of important ways.

Consider the following

Fingerprint examiners lack objective standards for evaluating whether two prints “match.” There is simply no uniform approach to deciding what counts as a sufficient basis for making an identification. Some fingerprint examiners use a “point-counting” method that entails counting the number of similar ridge characteristics on the prints, but there is no fixed requirement about how many points of similarity are needed. Six points, nine, twelve? Local practices vary, and no established minimum or norm exists. Others reject point-counting for a more holistic approach. Either way, there is no generally agreed-on standard for determining precisely when to declare a match. Although fingerprint experts insist that a qualified expert can infallibly know when two fingerprints match, there is, in fact, no carefully articulated protocol for ensuring that different experts reach the same conclusion.

Although it is known that different individuals can share certain ridge characteristics, the chance of two individuals sharing any given number of identifying characteristics is not known. How likely is it that two people could have four points of resemblance, or five, or eight? Are the odds of two partial prints from different people matching one in a thousand, one in a hundred thousand, or one in a billion? No fingerprint examiner can honestly answer such questions, even though the answers are critical to evaluating the probative value of the evidence of a match. Moreover, with the partial, potentially smudged fingerprints typical of forensic identification, the chance that two prints will appear to share similar characteristics remains equally uncertain.
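To see why these unanswered questions matter, consider a purely hypothetical calculation. The sketch below shows how a random-match estimate could be computed if validated frequencies for ridge characteristics existed and if those characteristics could be treated as statistically independent; neither the frequencies used here nor the independence assumption comes from actual fingerprint research, which is precisely the problem.

```python
# Hypothetical only: what a random-match estimate would look like IF one had
# validated frequencies for ridge characteristics and could treat them as
# independent. The frequencies below are invented for illustration; no such
# validated figures exist for latent fingerprints.

def random_match_probability(feature_frequency: float, n_features: int) -> float:
    """Chance that an unrelated person's print shows the same n_features,
    under a naive independence model."""
    return feature_frequency ** n_features

for freq in (0.5, 0.2, 0.1):         # hypothetical frequency of each characteristic
    for points in (4, 8, 12):        # number of matching points declared
        p = random_match_probability(freq, points)
        print(f"frequency {freq}, {points} points: about 1 in {round(1 / p):,}")
```

Depending on the inputs, the same declared “match” implies odds anywhere from one in sixteen to one in a trillion. Without empirical frequency data and a validated model, no examiner can say which figure applies.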

The potential error rate for fingerprint identification in actual practice has received virtually no systematic study. How often do real-life fingerprint examiners find a match when none exists? How often do experts erroneously declare two prints to come from a common source? We lack credible answers to these questions. Although some FBI proficiency tests show examiners making few or no errors, these tests have been criticized, even by other fingerprint examiners, as unrealistically easy. Other proficiency tests show more disturbing results: In one 1995 test, 34 percent of test-takers made an erroneous identification. Especially when an examiner evaluates a partial latent print—a print that may be smudged, distorted, and incomplete—it is impossible on the basis of our current knowledge to have any real idea of how likely she is to make an honest mistake. The real-world error rate might be low or might be high; we just don’t know.
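A simple Bayesian illustration shows why the missing error rate is not a technicality. In the sketch below, the false-positive rates, the examiner sensitivity, and the prior probability that the suspect really is the source are all invented for the example; the 0.34 figure merely echoes the 1995 proficiency test mentioned above, which is at best a rough proxy for performance in real casework.

```python
# Hypothetical illustration of why the examiner error rate matters.
# Every number here is assumed for the example; the article's point is that
# the true false-positive rate for real casework is unknown.

def prob_match_is_wrong(false_positive_rate: float,
                        prior_same_source: float,
                        sensitivity: float = 0.95) -> float:
    """Probability that a declared 'match' is an error, via Bayes' rule."""
    true_positives = sensitivity * prior_same_source
    false_positives = false_positive_rate * (1 - prior_same_source)
    return false_positives / (true_positives + false_positives)

# Suppose other evidence gives a 1-in-10 prior that the suspect is the source.
for fp_rate in (0.001, 0.01, 0.34):   # 0.34 echoes the 1995 proficiency test
    p = prob_match_is_wrong(fp_rate, prior_same_source=0.1)
    print(f"false-positive rate {fp_rate}: declared match wrong ~{p:.0%} of the time")
```

With an error rate near the proficiency-test figure, a declared match in this scenario would be wrong more often than not; with a rate a hundred times lower, it would rarely be. Without real data, no one can say which situation obtains.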

Fingerprint examiners routinely testify in court that they have “absolute certainty” about a match. Indeed, it is a violation of their professional norms to testify about a match in probabilistic terms. This is truly strange, for fingerprint identification must inherently be probabilistic. The right question for fingerprint examiners to answer is: How likely is it that any two people might share a given number of fingerprint characteristics? However, a valid statistical model of fingerprint variation does not exist. Without either a plausible statistical model of fingerprinting or careful empirical testing of the frequency of different ridge characteristics, a satisfying answer to this question is simply not possible. Thus, when fingerprint experts claim certainty, they are clearly overreaching, making a claim that is not scientifically grounded. Even if we assume that all people have unique fingerprints (an inductive claim, impossible itself to prove), this does not mean that the partial fragments on which identifications are based cannot sometimes be, or appear to be, identical.

Defenders of fingerprinting identification emphasize that the technique has been used, to all appearances successfully, for nearly 100 years by police and prosecutors alike. If it did not work, how could it have done so well in court? Even if certain kinds of scientific testing have never been done, the technique has been subject to a full century of adversarial testing in the courtroom. Doesn’t this continuous, seemingly effective use provide persuasive evidence about the technique’s validity? This argument has a certain degree of merit; obviously, fingerprinting often does “work.” For example, when prints found at a crime scene lead the police to a suspect, and other independent evidence confirms the suspect’s presence at the scene, this corroboration indicates that the fingerprint expert has made a correct identification.

The history of fingerprinting suggests that without adversarial testing, limitations in research and problematic assumptions may long escape the notice of experts and judges alike.

However, although the routine and successful police use of fingerprints certainly does suggest that they can offer a powerful form of identification, there are two problems with the argument that fingerprint identification’s courtroom success proves its merit. First, until very recently fingerprinting was challenged in court very infrequently. Though adversarial testing was available in theory, in practice, defense experts in fingerprint identification were almost never used. Most of the time, experts did not even receive vigorous cross-examination; instead, the accuracy of the identification was typically taken for granted by prosecutor and defendant alike. So although adversarial testing might prove something if it had truly existed, the century of courtroom use should not be seen as a century’s worth of testing. Second, as Judge Pollak recognizes in his first opinion in Llera Plaza, adversarial testing through cross-examination is not the right criterion for judges to use in deciding whether a technique has been tested under Daubert. As Pollak writes, “If ‘adversarial’ testing were the benchmark—that is if the validity of a technique were submitted to the jury in each instance—then the preliminary role of the judge in determining the scientific validity of a technique would never come into play.”

So what’s the bottom line: Is fingerprinting reliable or isn’t it? The point is that we cannot answer that question on the basis of what is presently known, except to say that its reliability is surprisingly untested. It is possible, perhaps even probable, that the pursuit of meaningful proficiency tests that actually challenge examiners with difficult identifications, more sophisticated efforts to develop a sound statistical basis for fingerprinting, and additional empirical study will combine to reveal that latent fingerprinting is indeed a reliable identification method. But until this careful study is done, we ought, at a minimum, to treat fingerprint identification with greater skepticism, for the gold standard could turn out to be tarnished brass.

Recognizing how much we simply do not know about the reliability of fingerprint identification raises a number of additional questions. First, given the lack of information about the validity of fingerprint identification, why and how did it come to be accepted as a form of legal evidence? Second, why is it being challenged now? And finally, why aren’t the courts (with the exception of Judge Pollak the first time around) taking these challenges seriously?

A long history

Fingerprint evidence was accepted as a legitimate form of legal evidence very rapidly, and with strikingly little careful scrutiny. Consider, for example, the first case in the United States in which fingerprints were introduced in evidence: the 1910 trial of Thomas Jennings for the murder of Clarence Hiller. The defendant was linked to the crime by some suspicious circumstantial evidence, but there was nothing definitive against him. However, the Hiller family had just finished painting their house, and on the railing of their back porch, four fingers of a left hand had been imprinted in the still-wet paint. The prosecution wanted to introduce expert testimony concluding that these fingerprints belonged to none other than Thomas Jennings.

Four witnesses from various bureaus of identification testified for the prosecution, and all concluded that the fingerprints on the rail were made by the defendant’s hand. The judge allowed their testimony, and Jennings was convicted. The defendant argued unsuccessfully on appeal that the prints were improperly admitted. Citing authorities such as the Encyclopedia Britannica and a treatise on handwriting identification, the court emphasized that “standard authorities on scientific subjects discuss the use of fingerprints as a system of identification, concluding that experience has shown it to be reliable.” On the basis of these sources and the witnesses’ testimony, the court concluded that fingerprinting had a scientific basis and admitted it into evidence.

What was striking in Jennings, as well as the cases that followed it, is that courts largely failed to ask any difficult questions of the new identification technique. Just how confident could fingerprint identification experts be that no two fingerprints were really alike? How often might examiners make mistakes? How reliable was their technique for determining whether two prints actually matched? How was forensic use of fingerprints different from police use? The judge did not analyze in detail either the technique or the experts’ claims to knowledge; instead, he believed that the new technique worked flawlessly based only on interested participants’ say-so. The Jennings decision proved quite influential. In the years following, courts in other states admitted fingerprints without any substantial analysis at all, relying instead on Jennings and other cases as precedent.

From the beginning, fingerprinting greatly impressed judges and jurors alike. Experts showed juries blown-up visual representations of the fingerprints themselves, carefully marked to emphasize the points of similarity, inviting jurors to look down at the ridges of their own fingers with new-found respect. The jurors saw, or at least seemed to see, nature speaking directly. Moreover, even in the very first cases, fingerprint experts attempted to distinguish their knowledge from other forms of expert testimony by declaring that they offered not opinion but fact, claiming that their knowledge was special, more certain than other claims of knowledge. But they never established conclusively that all fingerprints are unique or that their technique was infallible even with less-than-perfect fingerprints found at crime scenes.

In all events, just a few years after Jennings was decided, the evidential legitimacy of fingerprints was deeply entrenched, taken for granted as accepted doctrine. Judges were as confident about fingerprinting as was Pudd’nhead Wilson, a character in an 1894 Mark Twain novella, who believed that “ ‘God’s finger print language,’ that voiceless speech and the indelible writing,” could provide “unquestionable evidence of identity in all cases.” Occasionally, Pudd’nhead Wilson itself was cited as an authority by judges.

Why was fingerprinting accepted so rapidly and with so little skepticism? In part, early 20th-century courts simply weren’t in the habit of rigorously scrutinizing scientific evidence. Moreover, the judicial habit of relying on precedent created a snowballing effect: Once a number of courts accepted fingerprinting as evidence, later courts simply followed their lead rather than investigating the merits of the technique for themselves. But there are additional explanations for the new technique’s easy acceptance. First, fingerprinting and its claims that individual distinctiveness was marked on the tips of the fingers had inherent cultural plausibility. The notion that identity and even character could be read from the physical body was widely shared, both in popular culture and in certain more professional and scientific arenas as well. Bertillonage, for example, the measurement system widely used by police departments across the globe, was based on the notion that if people’s bodies were measured carefully, they inevitably differed one from the other. Similarly, Lombrosian criminology and criminal anthropology, influential around the turn of the century, held as a basic tenet that born criminals differed from normal law-abiding citizens in physically identifiable ways. The widespread belief in nature’s infinite variety meant that just as every person was different, just as every snowflake was unique, every fingerprint must be distinctive too, if it was only examined in sufficient detail. The idea that upon the tips of fingers were minute patterns, fixed from birth and unique to the carrier, made cultural sense; it fit with the order of things.

One could argue, from the vantage point of 100 years of experience, that the reason fingerprinting seemed so plausible at the time was because its claims were true, rather than because it fit within a particular cultural paradigm or ideology. But this would be the worst form of Whig history. Many of the other circulating beliefs of the period, such as criminal anthropology, are now quite discredited. The reason fingerprinting was not subject to scrutiny by judges was not because it obviously worked; in fact, it may have become obvious that it worked in part precisely because it was not subject to careful scrutiny.

Moreover, fingerprint examiners’ strong claim of certain, incontestable knowledge made fingerprinting appealing not only to prosecutors but to judges as well. In fact, there was an especially powerful fit between fingerprinting and that which the legal system hoped that science could provide. In the late 19th century, legal commentators and judges saw in expert testimony the potential for a particularly authoritative mode of evidence, a kind of knowledge that could have been and should have been far superior to that of mere eyewitnesses, whose weaknesses and limitations were beginning to be better understood.

There should be serious efforts to test and validate fingerprinting methodologies and to develop difficult and meaningful proficiency tests for practitioners.

Expert evidence held out the promise of offering a superior method of proof—rigorous, disinterested, and objective. But in practice, scientific evidence almost never lived up to these hopes. Instead, at the turn of the century, as one lawyer griped, the testimony of experts had become “the subject of everybody’s sneer and the object of everybody’s derision. It has become a newspaper jest. The public has no confidence in expert testimony.” Experts perpetually disagreed. Too often, experts were quacks or partisans, and even when they were respected members of their profession, their evidence was usually inconsistent and conflicting. Judges and commentators were angry and disillusioned by the actual use of expert evidence in court, and often said so in their opinions. (In this respect, there are noticeable similarities between the 19th-century reaction to expert testimony and present-day responses.)

Even if experts did not become zealous partisans, the very fact of disagreement was a problem. It forced juries to choose between competing experts, even though the whole reason for the expert in the first place was that the jury lacked the expertise to make a determination for itself. Given this context, fingerprinting seemed to offer something astonishing. Fingerprinting—unlike the evidence of physicians, chemists, handwriting experts, surveyors, or engineers—seemed to offer precisely the kind of scientific certainty that judges and commentators, weary of the perpetual battles of the expert, yearned for. Writers on fingerprinting routinely emphasized that fingerprint identification could not be erroneous. Unlike so much other expert evidence, which could be and generally was disputed by other qualified experts, fingerprint examiners seemed always to agree. Generally, the defendants in fingerprinting cases did not offer fingerprint experts of their own. Because no one challenged fingerprinting in court, either its theoretical foundations or, for the most part, the operation of the technique in the particular instance, it seemed especially powerful.

The idea that fingerprints could provide definite matches was not contested in court. In the early trials in which fingerprints were introduced, some defendants argued that fingerprinting was not a legitimate form of evidence, but typically defendants did not introduce fingerprint experts of their own. Fingerprinting thus avoided the spectacle of clashing experts on both sides of a case, whose contradictory testimony befuddled jurors and frustrated judges. The evidence that a defendant’s fingerprints matched those found at the crime scene was very rarely challenged. Fingerprinting grew to have cultural authority that far surpassed that of any other forensic science and the experts’ claims of infallibility came to be believed.

Although some present-day defendants do retain a fingerprint expert of their own, what is striking, even astonishing, is that no serious effort to challenge either the weight or admissibility of fingerprint evidence ever did emerge until just a couple of years ago. One of the many consequences of DNA profiling and its admissibility into court is that it has opened the door to challenges to fingerprinting. Ironically, DNA profiling—initially called “DNA fingerprinting” by its supporters to enhance its appeal—could turn out to have planted the seeds for fingerprinting’s downfall as legal evidence.

In the earliest cases, DNA profiling was accepted almost as breathlessly and enthusiastically as fingerprinting had been 75 years earlier. But after a few courts had admitted the new DNA identification techniques, defendants began to mount significant challenges to its admissibility. They began to point out numerous weaknesses and uncertainties. First, how exactly was a DNA match defined? What were the objective criteria for declaring that two similar-looking samples matched? Second, how accurate were the assumptions about population genetics that underlay the numbers used to define the probability of a mistaken DNA match? Third, however marvelous DNA typing appeared in theory, how often did actual laboratories making actual identifications make mistakes? These are the very questions that fingerprinting has ducked for a century. In the case of DNA, defense experts succeeded in persuading judges that there were serious concerns within each of these areas, and a number of courts even excluded DNA evidence in particular cases as a result. Eventually, the proponents of DNA were able to satisfy the courts’ concerns, but there is no doubt that the so-called “DNA wars” forced the new technique’s proponents to pay greater attention to both laboratory procedures and to the scientific basis for their statistical claims than they had done at first.

Current challenges

These challenges to DNA profiling, along with the increasing focus on judicial gatekeeping and reliability that grew out of the Supreme Court’s Daubert opinion, opened the door to contemporary challenges to fingerprinting. Together, they created a climate in which fingerprinting’s limitations became more visible, an environment in which legal challenge to a well-established, long-accepted form of scientific proof was doctrinally imaginable.

First, the move toward focusing on the reliability and validity of expert evidence made fingerprinting a more plausible target. Before Daubert, the dominant standard for assessing expert evidence was the Frye test, which focused on whether a novel technique was generally accepted by the relevant scientific community. Under Frye’s approach, it would have been extremely difficult to question the long-standing technique. Of course, fingerprinting was accepted by the relevant scientific community, especially if that community was defined as fingerprint examiners. Even if the community were defined more broadly (perhaps as forensic scientists in general), it would have been nearly impossible to argue that fingerprinting was not generally accepted. After all, fingerprinting was not just generally accepted; it was universally accepted as forensic science’s gold standard. Unlike Frye, Daubert made clear that judges were supposed to make a genuine assessment of whether the substance of the expert evidence was adequately reliable.

The Daubert approach offers two significant doctrinal advantages for anyone attempting to launch a challenge to fingerprint evidence. First, the views of the relevant community are no longer dispositive but are just one factor among many. We would hardly expect polygraph examiners to be the most objective or critical observers of the polygraph or those who practice hair identification to argue that the science was insufficiently reliable. When there is a challenge to the fundamental reliability of a technique through which the practitioners make their living, there is good reason to be especially dubious about general acceptance as a proxy for reliability. For a debate about which of two methods within a field is superior, the views of the practitioners might well be a useful proxy for reliability, but when the field’s very adequacy is under attack, the participants’ perspective should be no more than a starting point.

The second advantage of the Daubert approach is that it offers no safe harbor for techniques with a long history. Frye itself referenced novel scientific techniques, and many jurisdictions found that it indeed applied only to new forms of expert knowledge, not to those with a long history of use. Under Frye, this limitation made sense: If a form of evidence had been in use as legal evidence for a long while, that provided at least prima facie evidence of general acceptance. Although judges need not reexamine a form of expertise under Daubert each time it is used, if there are new arguments that a well-established form of evidence is unreliable, judges should not dismiss these arguments with a nod to history.

Daubert, then, made it imaginable that courts would revisit a long-accepted technique that was clearly generally accepted by the community of practitioners. But it was the controversies over DNA profiling that made the weaknesses in fingerprinting significantly more visible to critics, legal commentators, and defense lawyers alike. The debates over DNA raised issues that had never been resolved with fingerprinting; indeed, they practically provided a blueprint to show what a challenge to fingerprinting would look like. And the metaphoric link between the two identification techniques made the parallels only more obvious. They helped defense attorneys to recognize that fingerprinting might not fare so well if subjected to a particular kind of scientific scrutiny.

Of course, so far, fingerprinting has fared all right. Those several dozen judges who have considered the issue continue to allow fingerprint evidence even in cases involving smudged and distorted prints. What is most striking about the judicial response to date is that with the exception of Judge Pollak, trial judges faced with challenges to the admissibility of fingerprinting have not confronted the issue in any serious way. Appellate courts have also avoided the issue—with the notable exception of the 4th Circuit’s Judge Michael.

The cases reveal a striking reluctance even to admit that assessing fingerprinting under Daubert raises tricky issues. One judge, for example, wrote that, “latent print identification is the very archetype of reliable expert testimony.” Although it may be arguable that fingerprinting should be admissible under the legal standard, to argue that it is the “archetype of reliable expert testimony” is to misunderstand either the defense’s critique of fingerprinting, Daubert, or both.

I suggest that what is driving these opinions is the concern that if fingerprinting does not survive Daubert scrutiny, neither will a great deal of other evidence that we currently allow. Rejecting fingerprinting would, judges fear, tear down the citadel. It would simply place too many forms of expert evidence in jeopardy. Even though the validity of difficult fingerprint identifications may be woefully untested, fingerprint identification is almost certainly more probative than many other sorts of nonexpert evidence, including, perhaps, eyewitness testimony. But it may also be more probative than other forms of expert evidence that continue to be routinely permitted, such as physicians’ diagnostic testimony, psychological evidence, and other forms of forensic science evidence. As one judge wrote in his opinion permitting fingerprints, the error rate “is certainly far lower than the error rate for other types of opinions that courts routinely allow, such as opinions about the diagnosis of a disease, the cause of an accident or disease, whether a fire was accidental or deliberate in origin, or whether a particular industrial facility was the likely source of a contaminant in groundwater.” (Of course, this is just a hunch, for we lack empirical data about the error rates for many of these enterprises, including fingerprint identification.) A similar notion seems to have influenced Judge Pollak in his reconsideration in Llera Plaza. He emphasizes in his second opinion permitting the testimony that fingerprint evidence, although “subjective,” is no more and perhaps less subjective than many other permitted opinions by experts in court.

In addition, the judges who are assessing fingerprinting most likely believe deeply in fingerprinting. Rightly or wrongly, the technique continues to have enormous cultural authority. Dislodging such a strong prior belief will require, at a minimum, a great deal of evidence, more than the quantity needed to generate doubt about a technique in which people have less faith. One could certainly criticize these judges for burying their heads in the sand instead of executing their duties under Daubert in a responsible way. However, their reluctance to strictly apply Daubert to fingerprinting reflects a deeper and quite problematic issue that pervades assessments of expert evidence more generally. Daubert provides one vision of how to assess scientific expert evidence: with the standards of the scientific method. But surely this idealized version of the scientific method cannot be the only way to generate legitimate expert knowledge in court. If fingerprinting fails Daubert, does this suggest the limits of fingerprinting or the limits of Daubert? When judges refuse to rule on fingerprinting in careful Daubert terms, perhaps they are, knowingly or not, enacting a rebellion against the notion that a certain vision of science provides the only legitimate way to provide reliable knowledge.

Whether such a rebellion is to be admired or criticized is beyond the scope of this article. But it does suggest that the legal rule we ask judges to apply to expert evidence will not, in and of itself, control outcomes. Determinations of admissibility, no matter what the formal legal rule, will end up incorporating broader beliefs about the reliability of the particular form of evidence and about the legitimacy of various ways of knowing. Scrutiny of expert evidence does not take place in a cultural vacuum. What seems obvious, what needs to be proven, what can be taken for granted, and what is viewed as problematic all depend on cultural assumptions and shared beliefs, and these can change over time in noticeable and dramatic ways. Whatever the ostensible legal standard used, it is filtered through these shared beliefs and common practices. When forms of evidence comport with broader understanding of what is plausible, they may be especially likely to escape careful analysis as legal evidence, no matter what the formal legal standard ostensibly used to evaluate them. Although commentators have often criticized the legal system for being too conservative in admitting expert evidence, the problem may be the reverse: The quick and widespread acceptance of a new technique may lead to its deep and permanent entrenchment without sufficient scrutiny.

The second lesson that can be drawn from the state of fingerprint identification evidence is that there may be a productive aspect to the battles of expert witnesses in court. A constant leitmotif in the history of expert evidence has been the call for the use of neutral experts. Such neutral experts could prevent the jury from having to decide between different views on matters about which it lacks knowledge and could ensure that valid science comes before the tribunal. Neutral experts have been recommended so frequently as a cure for the problems of expert evidence that the only wonder is that we have in practice budged so little from our adversarial approach to expert testimony. Those who have advocated neutral experts as a solution to the difficulties of expert evidence in a lay jury system should therefore take heed from the history of fingerprinting. Early fingerprint experts were not neutral experts, in the sense that they were called by a party rather than appointed by the court, but they do provide one of our only examples of a category of expert scientific knowledge in which the typical adversarial battles were largely absent. And the history of fingerprinting suggests that without adversarial testing, limitations in research and problematic assumptions may long escape the notice of experts and judges alike. Although it is easy to disparage battles of the experts as expensive, misleading, and confusing to the factfinder, these battles may also reveal genuine weaknesses. It is, perhaps, precisely because of the lack of these challenges that fingerprinting was seen to provide secure and incontestable knowledge. Ironically, had defense experts in fingerprinting emerged from the beginning, fingerprint evidence might have lower cultural status but in fact be even more trustworthy than it is today.

Finally, to return to the practical dimension: Given fingerprinting’s weaknesses, what should be done? Clearly, more research is necessary. There should be serious efforts to test and validate fingerprinting methodologies and to develop difficult and meaningful proficiency tests for practitioners. Even in his second opinion, Judge Pollak recognizes that fingerprinting has not been adequately tested; he simply decides that he will admit it nonetheless. But until such testing proceeds, what should judges do? Should they follow Pollak’s lead and admit it in the name of not “let[ting] the best be the enemy of the good”? Especially given fingerprinting’s widespread authority, this seems highly problematic, for jurors are unlikely to understand that fingerprinting is far less tested than they assume. Should judges instead exclude it as failing Daubert? This would have the valuable side effect of spurring more research into fingerprinting identification’s reliability and is perhaps the most intellectually honest solution, for under Daubert’s criteria, fingerprinting does not fare well. The problem with exclusion is that fingerprinting, although problematic, is still probably far more probative than much evidence that we do permit, both expert and nonexpert; so it seems somewhat perverse to exclude fingerprinting while permitting, say, eyewitness testimony. Perhaps courts ought therefore to forge intermediate and temporary compromises, limiting the testimony to some degree but not excluding it completely. Judges could permit experts to testify about similarities but exclude their conclusions (this is, in fact, what Pollak proposed in his first opinion). Or they could admit fingerprinting but add a cautionary instruction. They could admit fingerprints in cases where the prints are exceptionally clear but exclude them when the specimens are poor. Any of these compromise solutions would also signal to the community of fingerprint examiners that business as usual cannot continue indefinitely; if more research and testing are not forthcoming, exclusion could be expected.

Frankly, reasonable people and reasonable judges can disagree about whether fingerprinting should be admissible, given our current state of knowledge. The key point is that it is truly a difficult question. At a minimum, judges ought to feel an obligation to take these challenges seriously, more seriously than most of them, with the exception of Judge Pollak and Judge Michael, have to date. They should grapple explicitly and transparently with the difficult question of what to do with a technique that clearly has great power as an identification tool but whose claims have not been sufficiently tested according to the tenets of science.

Humanities for Policy—and a Policy for the Humanities

Since World War II, policymakers have increasingly viewed investments in knowledge as central to achieving societal goals—unless that knowledge is in the humanities. In 2003, less than 1 percent of the $100-billion investment of public resources in knowledge is being devoted to the fields making up the humanities. If the federal budget is an accurate reflection of priorities, then policymakers view the humanities as having at best a marginal role in meeting the challenges facing our nation.

By contrast, many policymakers believe, in President Bush’s words, that “science and technology are critically important to keeping our nation’s economy competitive and for addressing challenges we face in health care, defense, energy production and use, and the environment.” This explains the overall trend in funding: Whereas federal appropriations for the National Institutes of Health (NIH) have doubled over the past six years, with a similar doubling now planned for the National Science Foundation (NSF), funding for the National Endowment for the Humanities (NEH) and the National Endowment for the Arts (NEA) has in real terms been cut by almost half since 1994. According to James Herbert of the NEH, the ratio of NSF to NEH funding rose from 5:1 in 1979 to 33:1 in 1997.

This apparent consensus concerning the humanities (a tacit consensus, for few have raised the question of whether the humanities can contribute to policy in areas such as health care, defense, or the environment) is contrary to the fundamental purposes for which Congress created the NEH and NEA in 1965. The founding legislation for these agencies notes that “an advanced civilization must not limit its efforts to science and technology alone, but must give full value and support to the other great branches of scholarly and cultural activity in order to achieve a better understanding of the past, a better analysis of the present, and a better view of the future.” Remarkably, little sustained effort has been given to examining the claim that the humanities can make significant contributions to policy outcomes.

We do find modest counter-trends. Several areas of policy, such as the regulation of biotechnology, are notable for the role played by the humanities in identifying alternative courses of action and their consequences. The Human Genome Project has for more than a decade devoted between 3 and 5 percent of its funding to a research program on the ethical, legal, and social implications of its work. And in 2001, President Bush created a Council on Bioethics to “articulate fully the complex and often competing moral positions on any given issue” related to topics such as embryo and stem cell research, assisted reproduction, cloning, and end-of-life issues. Chairman Leon Kass began the council’s work by reflecting on a work of literature, Nathaniel Hawthorne’s “The Birthmark,” which explores the unintended consequences of aspirations to physical perfection.

The potential currently seen for the humanities to contribute to policy development in biotechnology is indicative of their broader potential to contribute to the development of useful knowledge in areas such as nanotechnology, homeland security, or any area where science and technology intersect with broader societal interests. We suggest that humanists interested in improving the connection of their fields with the needs of policymakers—in contrast to those who support the humanities for their intrinsic value alone—can learn from the experiences of science in the political process over the past century, as well as from those who have studied the interconnections of science and policy. These lessons indicate a need for change within the humanities, via a systematic focus on “humanities policy.” We recommend beginning with a “humanities for policy” that will lead to a new “policy for the humanities.”

Science policy trajectory

A hundred years ago science, like the humanities today, was thought to be largely irrelevant to practical affairs, at least in terms of the public resources devoted to science. The U.S. Congress only grudgingly accepted James Smithson’s gift to establish a public institution for science; and in the decades before World War II, physicists were as unemployable as philosophers. In a remarkable turnaround, by 1965 United Nations Ambassador Adlai Stevenson could suggest that science and technology were more important to policy than anything else because they “are making the problems of today irrelevant in the long run.”

Students of science policy commonly point to World War II as a major cause of this sea change. Investments in scientific research and technological development (producing such innovations as radar and the atomic bomb) were decisive in winning the war. Before the war, and despite their claims to Congress, scientists spoke lovingly of their pursuit of “pure” science: pure because the research was conducted without consideration of use and was motivated by curiosity alone, which is not so far from an attitude shared by many humanities scholars today. After the war, in a display of both their newfound relevance and emerging political astuteness, scientists requested support for “basic” research, a term that could simultaneously be interpreted by scientists as preserving their “pure” desire to know and by policymakers as the essential first step toward practical applications.

This shift was real enough, and it is easy to demonstrate that science has played a central role in any number of societal advances over the past half century. Science also clearly contributes to decisionmaking by helping to identify problems that otherwise could not be seen. To pick one example, global climate change would not even be a policy issue without science. Humans experience weather, the vagaries of local day-to-day meteorological conditions, but have little capacity to perceive climate over the decades and centuries across wide regions of the planet. We need the synoptic scope and methodological power of science if we are to make sense of events beyond localized human perceptions.

But the same science that has delivered climate change to policymakers as an issue to be resolved has been frustratingly limited in its ability to motivate effective progress with respect to the climate issue, even though policymakers have devoted considerable resources to scientific research. Could this be because the issue of human influence on the Earth system is, at its core, not simply a matter of science and technology but also of politics and ethics, not to mention aesthetics, metaphysics, and theology? If this seems even minimally plausible—for instance, if our concern with climate change is not only a matter of self-interest, but is also an expression of the intrinsic value of species and ecosystems—then responses to climate change focusing exclusively on natural and social scientific research may be missing out on precisely those complementary types of knowledge that would help the nation make good use of its $25-billion investment in climate research and expand the policy alternatives available to decisionmakers.

The vast majority of our investment in knowledge related to the climate issue has focused on developing better models of climate change to reduce uncertainties about the long-term future. But for all the assistance that science can offer, our reliance on the results of computer predictions for the fashioning of policy rests on a fundamental misreading of the meaning of scientific facts. Trying to produce more and more precise facts can become an excuse for not making good use of the facts we already have. Rather than calling for more research, which even if successful would leave us with the option of responding to a predicted future state of affairs, we could launch a public conversation about what future conditions are in best accord with our values and then use science to help us monitor our attempts to achieve these goals. The future, after all, is not something that simply happens to us; being human means that we exercise a significant degree of influence over what will happen through the choices we make. Rather than basing action primarily on predictions of the future, as if it is something outside of us and beyond our control, we might also engage in an explicit debate about the kind of future we want to have.

Make no mistake; science and technology are essential to providing knowledge about the consequences of alternative courses of action. But humanities-assisted discussions about what constitutes the good life in a global technological society are crucial to identifying desirable policy actions. Given the transformative power of science and technology, now more than ever we need humanities for policy.

Toward a policy for the humanities

Claims about the importance of the humanities are not new. Indeed, and ironically enough, the historical trajectory of the humanities has been precisely the opposite of that of the sciences. Two centuries ago, it was the liberal arts and humanities that were thought necessary for informed public debates. The most brilliant political document of modernity, the U.S. Constitution, was composed by thinkers thoroughly steeped in history, philosophy, religion, and literature. The eclipse of a public role for the humanities since the mid-20th century has been prompted by a continuing current of positivism within our culture, which has simultaneously defined quantity as the measure of reality and devalued traditional notions of the public relevance of a liberal education. The positivist tenor of our culture has also reinforced the humanities’ own drive toward hyperprofessionalization and specialization (itself evocative of the sciences) and has encouraged a deconstructive scholasticism that has managed to be at once irritating and irrelevant.

Yet in the midst of a marginalization that has been in part self-inflicted, one can find within the humanities signs of a revival of more traditional relevance. One notable example is the applied ethics movement. During the final quarter of the 20th century, a combination of scientists and philosophers brought ethics down from the clouds of meta-ethical abstraction to dwell among the scientific clinics, research laboratories, industrial applications, and technological communications networks. The rise of biomedical ethics, research ethics, environmental ethics, and computer ethics is an attempt by the humanities to help us live with the expanding powers of science and technology.

In the 1960s, the technoscientific optimism of the 1950s began to be tempered by concern that science and technology were producing environmental degradation, cultural change, and even the prospect of global annihilation. Concerned scientists and humanists, as well as a substantial number of citizens, expressed these worries in the emerging environmental movement, nuclear weapons protests, and interest in the development of “appropriate technology.” Thus, in the 1970s, NSF itself introduced the Ethics and Values in Science and Technology (EVIST) program, later renamed Ethics and Values Studies (EVS), to investigate the moral context and social implications of science and technology. And this trend continues: The proposed Nanotechnology Research and Development Act (Senate bill 2945) includes support for a new center for ethical, societal, educational, legal, and workforce issues related to nanotechnology.

But the humanities are about more than ethics, as indicated by the recent expansions of applied ethics to include other humanistic disciplines. In the teaching of biomedical ethics, for instance, works of literature are used to help future physicians appreciate the human experiences of sickness and pain. In engineering ethics, narrative case studies and the autobiographical testimonies of moral heroes have become a staple of the classroom. Recent work in environmental philosophy increasingly relies on literature, poetry, history, art, and theology as a complement to ethical analysis. And these innovations only scratch the surface of what the humanities can bring to the interface of science, technology, and society.

In his plenary talk at the 2002 Sigma Xi conference on Science and the Humanities, George Bugliarello, chancellor of Polytechnic University in New York, argued that there is an urgent need for a broader engagement with the humanities. “The crucial questions for our culture are, what is it, indeed, to be human, and how can we maintain and enhance our humanity as we develop ever more revolutionary scientific advances?” Taking on such questions can significantly add to the contributions that science might make to the betterment of society, as well as help us to recognize those questions that science cannot address.

The development of a humanities policy, complementing science policy, economic policy, health care policy, and more, should begin with a vision of an interdisciplinary humanities deeply involved with public life and especially with questions associated with science and technology. More specifically, humanities policy could:

  • Expand existing science/humanities collaborations in applied and professional ethics to include the humanities more broadly, bringing in fields such as history, literature, and philosophy as a whole. The Woodrow Wilson National Fellowship Foundation is pioneering this approach in its Humanities at Work initiative.
  • Develop practical alliances between scientists, engineers, and humanities scholars in support of public and private funding to work on topics that span the sciences and the humanities. At the University of Colorado, we are using this strategy in our New Directions in the Earth Sciences and the Humanities project.
  • Create a program of research into humanities indicators to complement existing indicators in the sciences and engineering (such a project is currently being pursued by the American Academy of Arts and Sciences).

In our own work, we have found that public science offers a rich initial opportunity for testing the hypothesis that the humanities have the potential to make greater contributions to policy development and societal outcomes. Public science agencies offer a unique point of entry for humanities policy because of their nature as boundary institutions. Organizations such as the National Aeronautics and Space Administration, the U.S. Geological Survey, NSF, and NIH are supported because of our recognition that science can and should contribute to the public good, and conversely, that some types of knowledge are too fragile or important to be held in private hands. The humanities can serve as a bridge between public science and society, articulating the ethics and values dimension of societal challenges and integrating these dimensions with scientific information and perspectives.

Once we recognize that the humanities and the humanistically oriented social sciences have an important role to play in our public life and policy development, we are also faced with the question of what would be a proper policy for the humanities to encourage this development. But in developing a policy for the humanities, we should strive to avoid the pathologies of a linear model that begins with basic research and ends with societal benefit. Making the humanities more relevant to policy will require more than just “basic humanities.” A new humanities needs to be integrated not only with prospective users of knowledge but also with other disciplines that seek to contribute useful knowledge to decisionmakers. As part of our nation’s collective policies for the acquisition and use of knowledge, an explicit policy for the humanities would recognize that the hyperspecialization and esotericism of contemporary humanities education must make room for a humanities focused on contributing knowledge to those grappling with the complex issues of modern society. Such is the promise of a policy for the humanities that would make these fields once again an essential part of the fabric of public life.

Oil in the Sea

“When it rains, it pours”—or so a motorist caught in a sudden storm might think while sliding into another vehicle. It is not merely the reduced visibility and the frenetic behavior of drivers in the rain that foster such mishaps; the streets also are slicker just after the rain begins to fall. Why? Because the oil and grease that are dripped, spewed, or otherwise inadvertently deposited by motor vehicles onto roadways are among the first materials to be lifted off by the rain, thereby literally lubricating the surface. Nor do matters end with making life miserable for motorists. The oil and grease washed off roads will most likely run into storm sewers and be discharged into the nearest body of water. From there, the oily materials often are carried to the sea, where they can cause a host of environmental problems.

These events on a rainy day in the city illustrate an important but often overlooked route by which petroleum finds its way into coastal waters. Shutting down this and other routes presents a pressing challenge. True, the nation is doing a better job than ever of keeping oil out of the marine environment. But much work remains. We need to better understand the various pathways by which oil gets into the environment, how it behaves when it gets there, what effects it has on living organisms, and, perhaps most important, what steps can be taken to further reduce the amount of petroleum that enters the nation’s and the world’s oceans.

Sources and problems

Approximately 75 million gallons of petroleum find their way into North America’s oceans each year, according to Oil in the Sea III: Inputs, Fates, and Effects, a report issued in 2002 by the National Research Council (NRC). About 62 percent of the total—roughly 47 million gallons per year—derives naturally from seepages out of the ocean floor. The rest comes from human activities.

Contrary to common belief, the bulk of human-related inputs is not due to large-scale spills and accidents that occur during the transport of crude oil or petroleum products. Indeed, these types of releases account for only about 10 percent of the oil that reaches the sea as a result of human activity. The other 90 percent comes in the form of chronic low-level releases associated with the extraction and consumption of petroleum. Within this category, the biggest problem is nonpoint source pollution. Rivers and streams that receive runoff from a variety of land-based activities deliver roughly 16 million gallons of oil to North American coastal waters each year, more than half of the total anthropogenic load. The loads are most obvious in watersheds that drain heavily populated areas. Other sources of oil that turns up in the marine environment include jettisoned aircraft fuel, marine recreational vehicles, and operational discharges, such as cargo washings and releases from petroleum extraction.
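
For readers who want to see how these figures fit together, the short Python sketch below reconstructs the rough budget implied by the numbers quoted above: 75 million gallons in total, 62 percent from natural seeps, about 10 percent of the human-related share from transport spills and accidents, and roughly 16 million gallons delivered by rivers and streams. It is simply an illustration of the arithmetic, not a calculation taken from the NRC report.

# Approximate North American oil-input budget, reconstructed from the figures
# cited in this article (all values in millions of gallons per year).

total_input = 75.0        # petroleum entering North American oceans each year
natural_share = 0.62      # fraction attributed to natural seafloor seeps

natural_seeps = total_input * natural_share    # roughly 47
human_related = total_input - natural_seeps    # roughly 28

spills = 0.10 * human_related                  # transport spills and accidents
chronic = human_related - spills               # chronic low-level releases

nonpoint_runoff = 16.0                         # delivered by rivers and streams

print(f"Natural seeps:       {natural_seeps:.1f}")
print(f"Human-related total: {human_related:.1f}")
print(f"  Spills/accidents:  {spills:.1f}")
print(f"  Chronic releases:  {chronic:.1f}")
print(f"  Nonpoint runoff is {nonpoint_runoff / human_related:.0%} of the human-related total")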

There is at least some good news. Less oil is now entering the oceans than was estimated in a previous NRC report, issued in 1985. Some of this change may be attributable to differences between the methodologies used in the two reports, but some of the decrease is due to improved regulations governing how oil is produced and shipped. Spills from vessels in North American waters from 1990 through 1999 were down by nearly two-thirds compared to the prior decade. There also has been a dramatic decline in the amount of oil released into the environment during exploration for and production of petroleum and natural gas. Still, the recent NRC report concludes that despite such progress, the damage from oil in the marine environment is considerably more pervasive and longer-lasting than was previously understood.

Oil in the sea, whether from catastrophic spills or chronic releases, poses a range of environmental problems. Major spills receive considerable public attention because of the obvious attendant environmental damage, including oil-coated shorelines and dead or moribund wildlife, especially among seabirds and marine mammals. The largest oil spill in U.S. waters occurred on March 24, 1989, when the tanker Exxon Valdez, en route from Valdez, Alaska, to Los Angeles, California, ran aground on Bligh Reef in Prince William Sound, Alaska. Within six hours of the grounding, the ship spilled approximately 10.9 million gallons of crude oil, which would eventually affect more than 1,100 miles of coastline. Large numbers of animals were killed directly, including an estimated 900 bald eagles, 250,000 seabirds, 2,800 sea otters, and 300 harbor seals.

Oil pollution also can have more subtle biological effects, caused by the toxicity of many of the compounds contained in petroleum or by the toxicity of compounds that form as the petroleum degrades over time. These effects may be of short duration and limited impact, or they may span long periods and affect entire populations or communities of organisms, depending on the timing and duration of the spill and the numbers and types of organisms exposed to the oil.

Of course, oil spills need not be large to be hazardous to marine life. Even a small spill in an ecologically sensitive area can result in damage to individual organisms or entire populations. A spill’s influence also depends on the type and amount of toxins present in the petroleum product released. For instance, the fuel oil leaked when the tanker Prestige broke apart off the northwest coast of Spain in 2002 was initially more toxic than the crude oil spilled from the Exxon Valdez.

One major problem with all spills, no matter their size or type, is that the oil can remain in the environment for a long time. Several lines of evidence point to continued exposure of marine organisms to oil spilled by the Exxon Valdez. Substantial subsurface oil beneath coarse beaches in the spill area was found in the summer of 2001. The oil was still toxic and appeared to be chemically unchanged since its release more than a decade earlier. Some researchers predict that oil beneath mussel beds in the region affected by the spill may not decline to background levels for at least another two decades. In another instance, scientists studying salt marshes in Massachusetts that had been covered by fuel oil spilled from the barge Florida 30 years ago recently reported that oil is still present in sediments at depths of 6 to 28 centimeters. Moreover, the concentrations of oil found in the sediments are similar to those observed shortly after the spill. The researchers, from the Woods Hole Oceanographic Institution, predict that the compounds may remain there indefinitely, while crabs and other intertidal organisms continue to burrow through the oil-contaminated layer.

Prescribed remedies

Reducing the threat of oil in the oceans will require blocking the routes by which oil enters the environment. Focusing on inputs from spills and nonpoint sources, two of the major anthropogenic contributors, will show the range of actions needed.

Reducing spills. Worldwide, large spills resulting from tanker accidents are down considerably from the totals reported by the NRC in 1985—they have decreased to 17 million gallons annually from 140 million gallons annually. This gain was achieved even as the size of the global tanker fleet increased by 900 vessels, to a total of 7,270 in 1999. Progress was made through the implementation of numerous regulations and by technological advances in vessel construction, including the increased use of double-hull tankers, the use of new construction materials, and improvements in vessel design. Spills larger than 50,000 gallons now represent less than 1 percent of total spills by number but are responsible for more than 80 percent of the total spill volume. It is important to note, however, that more than half of all tanker spills now occur in North American waters. Although the number and size of spills in these waters have been reduced considerably during the past two decades, with total volume falling to 2.5 million gallons per year, they remain the dominant domestic source of oil input to the marine environment from petroleum transportation activities, as they are globally.
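
As a rough check on the scale of this improvement, the figures quoted above imply roughly an 88 percent drop in worldwide tanker-spill volume even as the fleet grew. The short Python sketch below shows the arithmetic; the pre-growth fleet size is simply back-calculated from the numbers in this article.

# Back-of-the-envelope check of the tanker-spill figures quoted above.
spill_volume_1985_report = 140.0   # million gallons per year, worldwide (1985 NRC report)
spill_volume_recent = 17.0         # million gallons per year, worldwide (2002 NRC report)

reduction = 1 - spill_volume_recent / spill_volume_1985_report
print(f"Worldwide tanker-spill volume is down about {reduction:.0%}")   # roughly 88%

fleet_1999 = 7270        # global tanker fleet in 1999
fleet_growth = 900       # vessels added over the period
print(f"...even as the fleet grew from about {fleet_1999 - fleet_growth} to {fleet_1999} vessels")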


Prevention, in the form of stricter regulations for tankers, has obviously not prevented all large spills, as the Prestige spill so dramatically illustrated. Irreparably damaged by a storm, the ship spilled nearly 3 million gallons of fuel oil, which spread over 125 miles of coastline in one of Spain’s leading areas of commercial fishing and shellfishing. But out of this calamity may come improved policies. The spill has highlighted concerns about older, single-hull ships (the Prestige was 26 years old) that are due to be phased out by 2015, and about what Europe should do to keep these ships safe and inspected in the meantime. Under a proposal made by the European Commission in December 2002, which the European Union (EU) is expected to adopt, single-hull oil tankers will not be allowed to carry heavy grades of oil in EU waters. Prohibited grades will include heavy fuel oil, heavy crude oil, waste oils, bitumen, and tar. Questions also have been raised about single-hull ships that are bypassing EU ports in order to avoid tough new EU-mandated inspection rules adopted in 1999 after the Erika oil spill polluted 250 miles of French shoreline. Even before the latest restrictions on transport in EU waters were passed, EU regulations required port authorities to check at least 25 percent of all ships coming into dock, starting with older, single-hull vessels and giving priority to ships flying flags of convenience or registered in countries with lax safety, labor, or tax rules.

The potential for a large tanker spill, however, is still significant, especially in regions without stringent safety procedures and maritime inspection practices. Furthermore, tanker traffic is expected to grow over the coming decades, as the centers of oil production continue to migrate toward the Middle East, Russia, and former Soviet states. U.S. agencies should expand their efforts to work with ship owners, domestically and internationally, through the International Maritime Organization, to enforce and build on the international regulatory standards that have contributed to the recent decline in oil spills and operational discharges.

Tankers are not the only potential source of large spills. There also is concern about aging oil pipelines and other coastal facilities. The aging of the infrastructure in fields in the central and western Gulf of Mexico and in some areas of Alaska is especially disconcerting, because these facilities often lie near sensitive coastal areas. Many pipelines in coastal Louisiana that should be buried no longer are. Numerous wellheads and other facilities within the estuaries are being abandoned as one company takes over facilities from another. As the resources become depleted, the cost of extraction exceeds the profit to be gained from sale of the product, and owners file for bankruptcy and abandon their holdings. Federal agencies should work with state environmental agencies and industry to evaluate the threat posed by aging pipelines and abandoned facilities, and to take steps to minimize the potential for spills.

Reducing nonpoint source inputs. Regulations developed under the Clean Water Act of 1972 have significantly reduced the number and amount of pollutants coming from the end of a pipe, and the Toxics Release Inventory tracks many of the pollutants that are released. But the more diffuse sources, such as urban runoff, atmospheric deposition, and watershed drainage, are not regulated or even monitored adequately. Unfortunately, nonpoint source runoff is difficult to measure and sparsely sampled; as a result, estimates have a high degree of uncertainty. Clearly, new federal, state, and local partnerships are needed to monitor runoff and to keep better track of how much petroleum and other pollutants industry and consumers are releasing.

Such a call for increased monitoring will undoubtedly elicit groans from managers and other people responsible for water quality, yet few people would argue that existing efforts are adequate for the overall task. Even the better-funded federal efforts are insufficient. The National Stream Quality Accounting Network, operated by the U.S. Geological Survey to monitor water quality in streams, has fewer and fewer stations, particularly along the coast, as budgets are tightened again and again. Additional funding is necessary to invigorate this program. Coastal stations that have been shut down need to be restored, and new stations along the coast and at critical inland locations need to be added. The network also must expand monitoring to include total hydrocarbons (instead of merely “oil and grease,” as is now the case), as well as a particular class of compounds called polynuclear aromatic hydrocarbons (PAHs). Growing evidence indicates that even at very low concentrations, PAHs carried in crude oil or refined products can have adverse effects on biota. This suggests that PAHs released from chronic sources may be of greater concern than previously recognized, and that in some instances the effects of petroleum spills may last longer than expected.

There also is a great need for expanded basic research. The most significant unanswered questions remain those regarding the effects on ecosystems of long-term, chronic, low-level exposures resulting from petroleum discharges and spills caused by development activities. Federal agencies, especially the Environmental Protection Agency (EPA), the U.S. Geological Survey, and the National Oceanic and Atmospheric Administration, should work with academia and industry to develop and implement a major research effort to more fully understand and evaluate the risk posed to organisms and the marine environment by the chronic release of petroleum, especially the cumulative effects of chronic releases and multiple types of hydrocarbons.

Alongside advances in monitoring and research, positive steps can be taken to implement proven methods for reducing nonpoint source discharges of oil into the environment. Remember, for example, the motorist brought low by slick pavement. The oil and grease that wash off highways during rain storms usually bypass sewage treatment plants in storm water overflow systems that pump the rain, and any materials caught up in the flow, directly to the closest body of water. In many urban settings, this runoff can be a significant contributor of petroleum to the ocean. As the population of coastal regions increases, urban runoff will become more polluted because of growth in the numbers of cars, asphalt-covered highways and parking lots, municipal wastewater loads, and the use and improper disposal of petroleum products. Collection and treatment of storm water overflows are necessary to control these inputs. Improved landscape and urban management, increased use of fuel-efficient vehicles, and public education can all contribute to lessening petroleum runoff.

The power of public education can be seen in the Chesapeake Bay region. In small but effective ways, people living within the bay’s watershed are reminded daily of their consumptive uses of pollutants that enter the water. They see license plates that proclaim “Save the Bay” and storm water drain covers that say “Don’t Dump. Drains to Chesapeake Bay.” Education is a first step for a better-informed public that will recognize the need for less consumption, less pollution, and better conservation of resources. This knowledge should but does not always lead to legislation and funding for reducing pollutant loads, including oil reaching the sea.

Additional remedial actions should target the recreational watercraft that have grown so popular during the past two decades. Most of these craft, including jet skis and small boats with outboard motors, use two-stroke engines, which release up to 30 percent of their fuel directly into the water. Collectively, these watercraft contribute almost 1 million gallons of petroleum each year into North American waters. The bulk of their input is in the form of gasoline, which is thought to evaporate rapidly from the water surface. However, little is known about the actual fate of the discharge, or about its biological effects while in its volatile phase, which is highly toxic. In 1990, heightened awareness of this problem led the EPA to begin regulating the “nonroad engine” population, under the authority of the Clean Air Act. Questions remain regarding the amount of petroleum residing in the water column or along the surface for biologically significant lengths of time. The EPA should continue its phase-out efforts directed at two-stroke engines, and it should expand research, in conjunction with other relevant federal agencies, on the fate and effects of discharges from these older, inefficient motors.

To achieve maximal effectiveness, efforts to understand and minimize oil pollution should pay heed to worldwide needs. The United States and other developed countries have invested much in technologies to reduce the spillage of oil into the marine environment, as well as in the science that has increased knowledge of the effects of spilled oil, whether acute or chronic. This knowledge should be transferred to people in other countries that are developing their petroleum reserves. It is imperative that petroleum companies not simply comply with regulations in the developing countries where they operate, but that they also transfer the knowledge derived from extensive studies in U.S. waters to the areas where their operations are expanding.

Shared responsibilities


It is tempting to blame the oil and shipping industries alone for spills such as those from the Exxon Valdez and the Prestige, but everyone who benefits from oil bears responsibility for the fraction that enters the sea. If companies have failed to build and buy double-hull tankers, it is in part because consumers do not wish to pay the increased fuel prices that would be needed to offset the extra cost. The push for improved methods of extracting, producing, and transporting oil must come from the general public, and this link reinforces the need for education.

The price of oil and natural gas is a major force in the world economy. As recently as the late 1990s, the average price of a barrel of crude oil was less than the cost of a takeout dinner. Yet a fluctuation of 20 or 30 percent in the price can influence automotive sales, travel decisions, interest rates, stock market trends, and the gross national products of industrialized nations. Perceived or real decreases in the availability of oil led to long lines for gasoline in the early 1970s and to the development and sale of fuel-efficient vehicles. Many observers argue that the low oil prices in recent years have helped put a glut of gas-guzzling vehicles on the highway. As the prices of a barrel of oil and a gallon of gasoline continue to rise in the face of social unrest in South America and the unrelenting hostilities in the Middle East, the value of this limited commodity may become more apparent.

The United States needs an energy policy that treats petroleum as a limited and treasured commodity, that encourages conservation rather than waste, and that supports the development and use of alternative energy. The nation also should tighten controls on air and water pollution and should adequately fund environmental monitoring of water resources. Without these policy changes, U.S. citizens cannot expect wise and environmentally sound use of the nation’s or the world’s resources. And if U.S. citizens cannot reduce their overly consumptive use of petroleum, and thereby help curb the diffuse but voluminous nonpoint source pollutant load, how can they expect citizens of developing countries to pollute and consume less in the face of an improving economy derived from sales of petroleum?

“We’re all in this together,” as the saying goes, including the motorist who crashed on that oil-slicked road. There is just one global economy, one global ecosystem, and one global source of nonrenewable petroleum reserves.

University-Related Research Parks

A university-related research park is a cluster of technology-based organizations (consisting primarily of private-sector research companies but also of selected federal and state research agencies and not-for-profit research foundations) that locate on or near a university campus in order to benefit from its knowledge base and research activities. A university is motivated to develop a research park by the possibility of financial gain associated with technology transfer, the opportunity to have faculty and students interact at the applied level with research organizations, and a desire to contribute to regional economic growth. Research organizations are motivated by the opportunity for access to eminent faculty and their students and university research equipment, as well as the possibility of fostering research synergies.

Research parks are an important infrastructure element of our national innovation system, yet there is no complete inventory of these parks, much less an analysis of their success. The following figures and tables, derived from research funded by the National Science Foundation, provide an initial look at the population of university-related research parks and the factors associated with park growth.

Park creation

The oldest parks are Stanford Research Park (Stanford University in California, 1951) and Cornell Business and Technology Park (Cornell University in New York, 1952). Even though by the 1970s there was general acceptance of the concept of a park benefiting both research organizations and universities, park creation slowed at this time because a number of park ventures failed and an uncertain economic climate led to a decline in total R&D activity. The founding of new parks increased in the 1980s in response to public policy initiatives that encouraged additional private R&D investment and more aggressive university technology transfer activities. Economic expansion in the 1990s spurred another wave of new parks.


Wide distribution

States with the most university research activity have the largest number of parks, but this has not been a simple cause-and-effect relationship. State and university leadership has historically been a critical motivating factor for developing parks.


Key characteristics

Most parks are related to a single university and are located within a few miles of campus, but are not owned or operated by the university. About one-half of the parks were initially established with public funds. As parks have grown, the technologies represented at parks have expanded, and incubator facilities have been established. Park size varies considerably. Research Triangle Park (Duke University, North Carolina State University, and the University of North Carolina; 1959) has 37,000 employees on a 6,800-acre site. Research and Development Park (Florida Atlantic University, 1985) has 50 employees on a 52-acre site.

Selected Characteristics of University-Related Research Parks

Percentage of parks formally affiliated with multiple universities: 6%
Percentage of parks owned and operated by a university: 35.4%
Percentage of parks on or adjacent to a university campus: 24.6%
Distance (miles) from a park to a university campus: mean 5.7; range 0 to 26
Percentage of parks located in distressed urban areas or abandoned public-sector areas: 11%
Percentage of parks initially funded with public money: 50.4%
Percentage of parks with a single dominant technology: 37.7%
Distribution of dominant technologies among parks with a dominant technology: bioscience, 48.5%; information technology, 42.4%; all other technologies, 9.1%
Percentage of parks with an incubator facility: 62.3%
Park size: mean 2,740 employees (range 30 to 37,000); mean 552 acres (range 6 to 6,800)


Growth factors

Many park directors associate park employment growth with park success, and the table below compares the growth rates of parks having certain characteristics with the average rate for all parks. Parks with a single dominant technology, located very close to campus, and managed by private-sector organizations are the fastest-growing parks (a simple illustrative calculation follows the table). The fastest-growing newer park is the University of Arizona Science and Technology Park (1995), which has been adding an average of more than 1,100 employees per year. The fastest-growing of the older parks is Research Triangle Park, which has been adding an average of almost 950 employees per year since its founding in 1959.

Park Characteristics That Affect Annual Park Growth (Measured in Park Employees Since Date of Establishment, Averaged over the Population of University-Related Research Parks)

Annual rate of park growth, averaged over the population of university-related research parks: 13.0% per year
Parks with a single dominant technology: grow 3.2% faster than the average, per year
Off-campus parks (evaluated at the mean distance from the university): grow 3.7% slower than the average, per year
Parks that are university-owned and -operated: grow 6.7% slower than the average, per year
An incubator facility: has no effect on park growth
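
To make the table concrete, the short Python sketch below shows one way a reader might combine these figures to estimate a hypothetical park's expected annual employment growth. It assumes the reported effects are additive adjustments to the 13.0% average; that is a simplifying reading offered for illustration, not the statistical model used in the underlying study.

baseline_growth = 13.0  # average annual employment growth across all parks, % per year

def estimated_growth(single_dominant_technology, off_campus, university_owned, has_incubator):
    """Rough estimate of a park's annual employment growth rate (% per year),
    assuming the table's effects simply add to or subtract from the average."""
    rate = baseline_growth
    if single_dominant_technology:
        rate += 3.2   # single dominant technology: grows faster than average
    if off_campus:
        rate -= 3.7   # off campus, evaluated at the mean distance from the university
    if university_owned:
        rate -= 6.7   # university-owned and -operated parks grow more slowly
    # has_incubator is listed only for completeness; the table reports no effect on growth
    return rate

# Example: a privately managed, on-campus park with a single dominant technology
print(estimated_growth(True, False, False, True))  # 16.2% per year under these assumptions

Under these assumptions, such a park would be expected to add employees at roughly 16% per year, a few points above the population average.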