Cooperation with China

In the wake of China’s crackdown on pro-democracy demonstrators in Tiananmen Square on June 4, 1989, the country’s progress on important fronts seemed to be in jeopardy. Many U.S. observers worried that China’s nascent economic reform, reliance on its scientific community, and movement toward greater intellectual openness and international cooperation had come to a halt. The 1990s, however, saw dramatic Chinese progress in science, technology, education, and economic reform. Some positive political developments occurred as well, but severe restrictions on human rights and the free exchange of ideas and information endure, in part as a result of the Tiananmen demonstrations.

The scientific realm continues to be hampered by ambiguous and opaque regulations concerning the sharing of information. As a result, Chinese researchers, particularly in the social sciences, shy away from certain research topics or international collaborations, and intellectual exchange suffers. The initial reluctance of physicians and officials to share what they knew about severe acute respiratory syndrome clearly illustrates the liabilities of China’s restrictive system.

The growing power of both China and the United States has raised the stakes of cooperation and poses new challenges in managing the relationship. In the aftermath of the Soviet Union’s collapse, the United States enjoys greater freedom to pursue its international objectives, which will not always coincide with China’s interests. China’s growing economic prowess, coupled with its still disappointing performance in human rights and political openness, makes some observers wary of U.S. cooperation. They fear that the result will only be to strengthen a formidable economic competitor and political adversary. I believe that the benefits of cooperation outweigh the risks. At the same time, though, I see a need to increase our efforts to build greater trust and communication as a foundation for future cooperation.

Scientific cooperation is no longer simply an element of our policy to create a fabric of relations with China. Its importance has grown in several ways. First, China’s capabilities have reached a level where the scientific payoffs of cooperation can benefit not only China but also the rest of the world. Second, cooperation is key to building China’s capacity for technological innovation. Although this will strengthen an emerging competitor, it must be understood that if a country of China’s magnitude fails to become a thriving player in the global, knowledge-based economy–and soon–the economic, political, and human consequences for the entire world could be disastrous. Finally, thanks to their prestige, China’s scientists and engineers are powerful agents of change; international cooperation strengthens their ability to encourage intellectual freedom and the exchange of knowledge.

In strengthening our scientific ties with China, it is important to realize that more is at stake than scientific knowledge. Cooperation can have a broad impact on our mutual understanding. In its quest for integration into the global economic system and the global scientific enterprise, China is open to acquiring a deeper understanding of the United States and other systems. There may also be lessons for us to learn from China’s vigorous experiments to improve its capacity for technological innovation. Cooperation in science increases our knowledge of each other’s systems; conversely, a better appreciation of our respective values can help us identify and remove obstacles to productive cooperation.

Many topics lend themselves to fruitful exchanges, including the treatment of intellectual property, approaches to human subjects and genetic research, attracting precollege students into scientific careers, and popularizing science. The joint exploration of subjects such as research financing, access to and dissemination of scientific information, and the interaction of the scientific community with policymakers can lead to broader questions of political processes and cultural norms. An example of what might be done on a broader scale is sustained policy dialogues. Since 1999, the U.S. National Science Foundation (NSF) and its Chinese equivalent have sponsored discussions between Chinese and U.S. scientists and policymakers as a complement to the agencies’ support of research collaborations. The time is also right to encourage joint in-depth comparative policy studies with China’s emerging community of policy researchers.

Although cooperation is most easily negotiated and managed bilaterally, a bilateral partnership is also more vulnerable to misunderstanding and mistrust. Given the unprecedented global power of the United States and the growing strength of China, it is more important than ever to ensure the stability of our scientific partnership. We should therefore complement our bilateral arrangements with an equivalent portfolio of multilateral partnerships. The goal of encouraging Chinese political and cultural change through scientific cooperation is more likely to be reached by helping China become more comfortable with multinational norms and standards than by applying pressure unilaterally.

China is already a constructive member of international organizations ranging from the International Council for Science to UNESCO. It also participates in ad hoc structures that engage multilaterally in research (the international rice genome project, for example) and scientific advice (the Inter-Academy Panel), and helps support various NATO-like advanced study institutes with NSF and other partners in the Asia-Pacific region. More such ad hoc partnerships, large or small, would be beneficial.

Another payoff of closer scientific ties is that they will allow both countries to capitalize on the potential of U.S.-trained Chinese scientists and engineers. The flow of Chinese students and scholars to the United States during the past 20 years has benefited both countries. The United States has gained a critical influx of talent and, to the extent that the researchers return home, China has received an injection of scientists and engineers who are not only trained at the frontiers of knowledge but familiar with the world’s most productive system of research and technological innovation. Both countries have a stake in the continuation of this process.

China is meeting its rapidly growing need for scientific and technical workers in part by aggressively expanding its educational system. Incentive programs and the growth of technology-based joint ventures have attracted home about a third of those trained overseas, but it is unlikely that these efforts will be sufficient. It will take time for China to become more attractive to its foreign-trained scientists and engineers who have tasted the professional and personal rewards of a competitive and open society.

In the meantime, however, foreign-trained workers may be able to contribute to China’s scientific enterprise part time or intermittently as transnational researchers. Such arrangements can benefit all parties: The individual contributes to China’s development while continuing to enjoy the advantages of remaining within our system; China has access to researchers whose value is higher because they are still connected to the U.S. enterprise; and the United States retains U.S.-trained talent, at least for part of the time.

We should encourage this emerging pattern of trans-Pacific mobility. U.S.-trained Chinese scientists and engineers function effectively in both cultures. Hence they are in a unique position to build mutual understanding of our respective systems; raise the level of trust that underlies cooperation; enhance cross-fertilization between our two large scientific communities; and, most critically, accelerate change across a broad swath of Chinese politics and culture.

Of course, capitalizing on this opportunity is not without obstacles. China must continue its support for the international mobility of its scientists and engineers. More important, it needs to further depoliticize its research and education environment to suit those who have lived in an open society. On the U.S. side, national and homeland security considerations have made transborder mobility more complex. As security procedures are applied, we must ensure that they take into consideration all aspects of our national interest, including the considerable benefits of scientific cooperation with China.

Restructuring the U.S. Health Care System

The past two decades have seen major economic changes in the health care system in the United States, but no solution has been found for the basic problem of cost control. Per-capita medical expenditures increased at an inflation-corrected rate of about 5 to 7 percent per year during most of this period, with health care costs consuming an ever-growing fraction of the gross national product. The rate of increase slowed a little for several years during the 1990s, with the spread of managed care programs. But the rate is now increasing more rapidly than ever, and control of medical costs has reemerged as a major national imperative. Failure to solve this problem has resulted in most of the other critical defects in the health care system.

Half of all medical expenditures occur in the private sector, where employment-based health insurance provides at least partial coverage for most (but by no means all) people under age 65. Until the mid-1980s, most private insurance was of the indemnity type, in which the insurer simply paid the customary bills of hospitals and physicians. This coverage was offered by employers as a tax-free fringe benefit to employees (who might be required to contribute 10 to 20 percent of the cost as a copayment), and was tax-deductible for employers as a business cost. But the economic burden and unpredictability of ever-increasing premiums caused employers ultimately to abandon indemnity insurance for most of their workers. Companies increasingly turned to managed care plans, which contracted with employers to provide a given package of health care benefits at a negotiated and prearranged premium in a price-competitive market.

Managed care has failed in its promise to prevent sustained escalation in costs.

When the Clinton administration took office in 1993, one of its first initiatives was an ambitious proposal to introduce federally regulated competition among managed care plans. The objective was to control premium prices while ensuring that the public had universal coverage, received quality care, and could choose freely among care providers. It was hoped that all kinds of managed care plans, including the older not-for-profit plans as well as the more recent plans offered by investor-owned companies, would be attracted to the market and would want to compete for patients on a playing field kept level by government regulations.

But this initiative was sidetracked before even coming to a congressional vote. There was strong opposition from the private insurance industry, which saw huge profit-making opportunities in an unregulated managed care market but not under the Clinton plan. Moreover, the proposed plan’s complexity and heavy dependence on government regulation frightened many people–including the leaders of the American Medical Association–into believing it was “socialized medicine.”

The failure of this initiative delivered private health insurance into the hands of a new and aggressive industry that made enormous profits by keeping the lid on premiums while greatly reducing its expenditures on medical services–and keeping the difference as net income. This industry referred to its expenditures on care as “medical losses,” a term that speaks volumes about the basic conflict between the health interests of patients and the financial interests of the investor-owned companies. But, in fact, there was an enormous amount of fat in the services that had been provided through traditional insurance, so these new managed care insurance businesses could easily spin gold for their investors, executives, and owners by eliminating many costs. They did this in many different ways, including denial of payment for hospitalizations and physicians’ services deemed not medically essential by the insurer. The plans also forced price discounts from hospitals and physicians and made contracts that discouraged primary care physicians from spending much time with patients, ordering expensive tests, or referring patients to specialists. These tactics were temporarily successful in controlling expenditures in the private sector. Fueled by the great profits they made, managed care companies expanded rapidly. The industry then consolidated into a relatively few giant corporations that enjoyed great favor on Wall Street, and quickly came to exercise substantial influence over the political economy of U.S. health care.

The other half of medical expenditures is publicly funded, and this sector was not even temporarily successful in restraining costs. The government’s initial step was to adopt a method of reimbursing hospitals based on diagnosis-related groups (DRGs). Rather than paying fees for each hospital day and for individual procedures, the government would pay a set amount for treating a patient with a given diagnosis. Hospitals were thus given powerful incentives to shorten stays and to cut corners in the use of resources for inpatient care. At the same time, they encouraged physicians to conduct many diagnostic and therapeutic procedures in ambulatory facilities that were exempt from DRG-based restrictions on reimbursement.

Meanwhile, the temporary success of private managed care insurance in holding down premiums–along with its much-touted (but never proven) claims of higher quality of care–suggested to many politicians that government could solve its health care cost problems by turning over much of the public system to private enterprise. Therefore, states began to contract out to private managed care plans a major part of the services provided under Medicaid to low-income people. The federal government, for political reasons, could not so cavalierly outsource care provided to the elderly under Medicare, but did begin to encourage those over 65 to join government-subsidized private plans in lieu of receiving Medicare benefits. For a time, up to 15 percent of Medicare beneficiaries chose to do so, mainly because the plans promised coverage for outpatient prescription drugs, which Medicare did not provide.

What about attempts to contain the rapidly rising physicians’ bills for the great majority of Medicare beneficiaries who chose to remain in the traditional fee-for-service system? The government first considered paying doctors through a DRG-style system similar to that used for hospitals, but this idea was never implemented; and in 1990, a standardized fee schedule replaced the old “usual and customary” fees. Physicians found a way to maintain their incomes, however, by disaggregating (and thereby multiplying) billable services and by increasing the number of visits; and Medicare’s payments for medical services continued to rise.

Cost-control efforts by for-profit managed care plans and by government have diminished the professional role of physicians as defenders of their patients’ interests. Physicians have become more entrepreneurial and have entered into many different kinds of business arrangements with hospitals and outpatient facilities, in an effort not only to sustain their income but also to preserve their autonomy as professionals. Doctor-owned imaging centers, kidney dialysis units, and ambulatory surgery centers have proliferated. Physicians have acquired financial interests in the medical goods and services they use and prescribe. They have installed expensive new equipment in their offices that generates more billing and more income. And, in a recent trend, groups of physicians have been investing in hospitals that specialize in cardiac, orthopedic, or other kinds of specialty care, thus competing with community-based general hospitals for the most profitable patients. Of course, all of these self-serving reactions to the cost-controlling efforts of insurers are justified by physicians as a way to protect the quality of medical care. Nevertheless, they increase the costs of health care, and they raise serious questions about financial influences on professional decisions.

In the private sector, managed care has failed in its promise to prevent sustained escalation in costs. Once all the excess was squeezed out, further cuts could only be achieved by cutting essentials. Meanwhile, new and more expensive technology continues to come on line, inexorably pushing up medical expenditures. Employers are once again facing a disastrous inflation in costs that they clearly cannot and will not accept, and they are cutting back on covered benefits and shifting more costs to employees. Moreover, there has been a major public backlash against the restrictions imposed by managed care, forcing many state governments to pass laws that prevent private insurers from limiting the health care choices of patients and the medical decisions of physicians. The courts also have begun to side with complaints that managed care plans are usurping the prerogatives of physicians and harming patients.

In the public sector, a large fraction of those Medicare beneficiaries who chose to shift to managed care are now back with their standard coverage, either because they were dissatisfied and chose to leave their plans or because plans have terminated their government contracts for lack of profit. The unchecked rise in expenditures on the Medicaid and Medicare programs is causing government to cut back on benefits to patients and on payments to physicians and hospitals. Increased unemployment has reduced the numbers of those covered by job-related insurance and thus has expanded the ranks of the uninsured, which now total more than 41 million people. Reduced payments have caused many physicians to refuse to accept Medicaid patients. Some doctors are even considering whether they want to continue accepting new elderly patients who do not have private Medigap insurance to supplement their Medicare coverage.

Major changes needed

What will the future bring? The present state of affairs cannot continue much longer. The health care system is imploding, and proposals for its rescue will be an important part of the national political debate in the upcoming election year. Most voters want a system that is affordable and yet provides good-quality care for everyone. Some people believe that modest, piecemeal improvements in the existing health care structure can do the job, but that seems unlikely. Major widespread changes will be needed.

Those people who think of health care as primarily an economic commodity, and of the health care system as simply another industry, are inclined to believe in market-based solutions. They suggest that more business competition in the insuring and delivering of medical care, and more consumer involvement in sharing costs and making health care choices, will rein in expenditures and improve the quality of care. However, they also believe that additional government expenditures will be required to cover the poor.

Those people who do not think that market forces can or should control the health care system usually advocate a different kind of reform. They favor a consolidated and universal not-for-profit insurance system. Some believe in funding this system entirely through taxes and others through a combination of taxes and employer and individual contributions. But the essential feature of this idea is that almost all payments should go directly to health care providers rather than to the middlemen and satellite businesses that now live off the health care dollar.

A consolidated insurance system of this kind–sometimes called a single-payer system–could eliminate many of the problems in today’s hodgepodge of a system. However, sustained cost control and the realignment of incentives for physicians with the best interests of their patients will require still further reform in the organization of medical care. Fee-for-service private practice, as well as regulation of physician practices by managed care businesses, will need to be largely replaced by a system in which multispecialty not-for-profit groups of salaried physicians accept risk-free prepayment from the central insurer for the delivery of a defined benefit package of comprehensive care.

Such reform, seemingly utopian now, may eventually gain wide support as the failure of market-based health care services to meet the public’s need becomes increasingly evident, and as the ethical values of the medical profession continue to erode in the rising tide of commercialism.

Maglev Ready for Prime Time

“Putting Maglev on Track” (Issues, Spring 1990) observed that growing airline traffic and associated delays were already significant and predicted that they would worsen. The article argued that a 300-mile-per-hour (mph) magnetic levitation (maglev) system integrated into airport and airline operations could be a part of the solution. Maglev was not ready for prime time in 1990, but it is now.

As frequent travelers know, air traffic delays have gotten worse, because the airport capacity problem has not been solved. As noted in the Federal Aviation Administration’s (FAA’s) 2001 Airport Capacity Enhancement Plan: “In recent years growth in air passenger traffic has outpaced growth in aviation system capacity. As a result, the effects of adverse weather or other disruptions to flight schedules are more substantial than in years past. From 1995 to 2000, operations increased by 11 percent, enplanements by 18 percent, and delays by 90 percent.” With the heightened security that followed the September 11, 2001, terrorist attacks, ground delays have exacerbated the problem. The obvious way to reduce delays is to expand airport capacity, but expansion has encountered determined public opposition and daunting costs. The time is right to take a fresh look at maglev.

If fully exploited, maglev will provide speed, frequency, and reliability unlike any extant transportation mode.

High-speed trains that travel faster than 150 mph have demonstrated their appeal in Europe and Asia. Although Amtrak has had some success with trains that go as fast as 125 mph on the Washington, D.C., to New York line, the United States has yet to build a true high-speed rail line. But interest is growing among transportation planners. Roughly half the states are currently developing plans for regional high-speed rail corridors. Pending congressional legislation would authorize $10 billion in bonds over 10 years to finance high-speed rail projects in partnerships with the states. However, due to the severe funding limitations, most of these projects are likely to pursue only incremental improvements in existing rail lines. Experience in Europe and Japan suggests that higher speeds are needed to lure passengers from planes and to attract new travelers.

Even though–or perhaps because–the Europeans and Japanese already have high-speed rail lines, they have been aggressively developing maglev systems. The Japanese built a new 12-mile maglev test track just west of Tokyo and achieved a maximum speed of 350 mph. They plan to extend the test track and make it part of a commercial line between Tokyo and Osaka. The German government approved the Transrapid System of maglev technology for development in the early 1990s and has been actively marketing the system for export. It recently announced funding of $2 billion to build a 50-mile route between Dusseldorf and Dortmund and a 20-mile connector linking Munich to its airport. Meanwhile, the Swiss have been developing a new approach for their Swiss Metro System, involving high-speed maglev vehicles moving in partially evacuated tunnels. China is building a maglev system to connect Shanghai with Pudong International Airport. This system should be in demonstration operation in 2003 and in revenue operation early in 2004.

The United States has also exhibited interest, but its progress has been slower. In 1990, the United States launched a multiagency National Maglev Initiative that began with a feasibility analysis and was eventually to evolve into a development program. Although the initial analysis was promising, the effort was terminated in 1993 before any significant hardware development began. After a five-year hiatus, Congress passed the Transportation Equity Act for the 21st Century, which included a program to demonstrate a 40-mile maglev line that could later be lengthened. Selection of a test site will be announced soon.

Maglev makes the most economic sense where there is already strong demand and where the cost of meeting this demand through expansion of existing infrastructure is high. Airports offer an appealing target. Current capital improvement projects at 20 major airports have a combined cost of $85 billion, enough to build 2,460 miles of maglev guideway at $35 million per double-track mile. This would be sufficient to connect maglev lines to airports in several parts of the country.
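As a rough arithmetic sketch of that comparison (the dollar figures come from the paragraph above; the division itself is only an illustration):

```python
# Rough check of the airport-budget comparison above (figures taken from the text).
airport_capital_budget = 85e9   # combined capital projects at 20 major airports, in dollars
cost_per_guideway_mile = 35e6   # quoted maglev cost per double-track mile, in dollars

miles_of_guideway = airport_capital_budget / cost_per_guideway_mile
print(f"{miles_of_guideway:,.0f} miles of double-track guideway")
# Prints roughly 2,400 miles, on the order of the 2,460-mile figure cited above.
```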

Maglev must also be compared with conventional high-speed rail. Maglev and high-speed rail costs are roughly equivalent for elevated guideways, the type of system most likely to be built. The added technology cost of maglev systems tends to be balanced by the fact that maglev vehicles weigh about one-half to one-third as much per seat as high-speed passenger trains, resulting in competitive construction costs. And because there is no physical contact between the train and the guideway in a maglev system, operation and maintenance costs are estimated to be between 20 and 50 percent less than what is required for high-speed rail systems. Maglev also has other advantages over rail systems: It takes up less space and has greater grade-climbing and turning capabilities, which permit greater flexibility in route selection; its superior speed and acceleration make it possible for fewer trains to serve the same number of people; and the greater speed will undoubtedly attract more passengers.

Lessons learned

After looking at the progress of the technology, the history of U.S. government involvement in transportation infrastructure, and the experience of other countries that have begun maglev development, we arrived at the following key conclusions:

Performance: Speed counts. Ridership on ground transportation systems increases markedly with speeds that enable trips to be completed in times that are competitive with airline travel. Amtrak’s incremental improvements don’t cut it.

Economics: Maglev is cost-competitive with high-speed rail, yet provides greater speed, more flexibility, and the capability to integrate with airline/airport operations. The physical connection to the airport is a necessary first step, but the benefits of maglev will not be realized until the next step is taken: integrating maglev with airline operations.

Government role: If maglev is to be a part of the solution to airport congestion, the advocate agency should be the FAA or the Federal Highway Administration, since maglev would primarily be accommodating air and highway travelers.

Public-private partnership: Private industry has long been a willing partner in development and deployment, but the federal government needs to demonstrate a long-term commitment if the private sector is expected to participate. In 1997, the Maglev Study Advisory Committee was congressionally mandated to evaluate near-term applications of maglev technology in the United States. The committee made the following recommendations for government action: a federal commitment to develop maglev networks for passenger and freight transportation, with the government as infrastructure provider and the private sector as operator; federal support for two or three demonstration projects; and federal or state funding for guideways and private financing for the vehicles, stations, and operating and maintenance costs.

Benefits of early deployment: The United States needs to have one or two operating systems to convince the nation that the technology is practical and to identify areas for improvement, such as new electronic components and magnetic materials, new aircraftlike lightweight vehicle body designs, new manufacturing and installation methods, and innovative civil construction techniques and materials.

Research: The nation needs long-term federal support for transportation system planning and R&D activities. In addition, since it is impractical to conduct R&D activities on a commercial line, it will be necessary to design a national test facility where innovations that affect system cost and performance can be fully evaluated under carefully controlled and safe conditions. This is no different from the research approach that other transportation modes have developed.

Fresh thinking: Maglev may best be thought of as an entirely new mode of transportation. It is neither a low-flying airplane nor a very fast locomotive-drawn train. It has many attributes that, if fully exploited, will provide speed, frequency, and reliability unlike any extant mode. It will add mobility even in adverse weather conditions and without the adverse effects of added noise and air pollution and increased dependence on foreign oil. If integrated with airline operations, it will augment rather than compete with the airlines for intercity travelers and will decrease the need for further highway expansion. It can be incorporated into local transit systems to improve intracity mobility and access to airports.

The future of high-speed ground transportation in the United States can be a bright one. If implemented appropriately, maglev presents the opportunity to break the frustrating cycle in which modest infrastructure improvements produce only a minimal ridership increase that results in disappointing financial performance and a call for additional incremental funding. Successful implementation of just one U.S. maglev project should open the door to an alternative to the cycle of frustration. Government should be an active partner in this process.

The Continued Danger of Overfishing

New studies continue to chronicle how overfishing and poor management have severely hurt the U.S. commercial fishing industry. Thus, it makes sense to examine the effectiveness of the Sustainable Fisheries Act of 1996, which overhauled federal legislation guiding fisheries management. At the time, I predicted that, if properly implemented, the act would do much to bolster recovery and sustainable management of the nation’s fisheries. Today, I see some encouraging signs but still overall a mixed picture.

The 1996 legislation amended the Fisheries Conservation and Management Act of two decades earlier. The original law had claimed waters within 200 miles of the coast of the United States and its possessions (equivalent to some two-thirds of the U.S. continental landmass) as an “exclusive economic zone.” In so doing, it set the stage for eliminating the foreign fishing that had devastated commercially important fish and other marine life populations. Although it set up a complicated management scheme involving regional councils, the original legislation failed to direct fishery managers to prohibit overfishing or to rebuild depleted fish populations. Nor did it do anything to protect habitat for fishery resources or to reduce bycatch of nontarget species. Under purely U.S. control, many fish and shellfish populations sank to record low levels.

The only sensible course is to move forward: to eliminate overfishing, reduce bycatch, and protect and improve habitat.

The 1996 act addressed many of those management problems, especially the ones connected with overfishing and rebuilding. In the previous reauthorization of the earlier act, for example, the goal of “optimum yield” had been defined as “the maximum sustainable yield from the fishery, as modified by any relevant social, economic, or ecological factor.” A tendency of fishery managers to act on short-term economic considerations had often led to modifications upward, resulting in catch goals that exceeded sustainable levels and hence in overfishing, depletion, and the loss of economic viability in numerous fisheries.

The Sustainable Fisheries Act changed the word “modified” to “reduced.” In other words, fishery managers may no longer allow catches exceeding sustainable yields. Other new language defined a mandatory recovery process and created a list of overfished species. When a fish stock was listed as overfished, managers were given a time limit to enact a recovery plan. Because undersized fish and nontarget species caught incidentally and discarded dead account for about a quarter of the total catch, the law enabled fishery managers to require bycatch-reduction devices.

Although I had high hopes for the act when it was passed, its actual implementation, which began only in 1998, has been less than uniform. Fishery groups have sued to slow or block recovery plans, because the first step in those plans is usually to restrict fishing. Meanwhile, conservation groups have sued to spur implementation.

In that contentious climate, progress has been somewhat halting. On the one hand, overfishing continues for some species, and many fish populations remain depleted. One of the most commercially important fish–Atlantic cod–has yet to show strong increases despite tighter fishing restrictions.

On the other hand, in cases in which recovery plans have actually been produced, fish populations have done well. For example, New England has some of the most depleted stocks in U.S. waters. But remedies that in some cases began even before the law was reformed–closures of important breeding areas, regulation of net size, and reductions in fishing pressure–have resulted in encouraging upswings in the numbers of some overfished species. Not least among the rebounding species are scallops, yellowtail flounder, and haddock. Goals have been met for rebuilding sea scallops on Georges Bank and waters off the mid-Atlantic states. There has even been a sudden increase in juvenile abundance of notoriously overfished Atlantic swordfish. That is because federal managers, responding to consumer pressure and to lawsuits from conservation groups, closed swordfish nursery areas where bycatch of undersized fish had been high and cut swordfishing quotas. Some other overfished species, among them Atlantic summer flounder, certain mackerel off the Southeast, red snapper in the Gulf of Mexico, and tanner and snow crabs off Alaska, are rebounding nicely.

The trend in recovery efforts is generally upward. The number of fish populations with sustainable catch rates and healthy numbers has been increasing, and the number that are overfished declining. And rebuilding programs are now finally in place or being developed for nearly all overfished species.

Maintaining healthy fish populations is not just good for the ocean, of course, but also for commerce: Fish are worth money. Ocean fishing contributes $50 billion to the U.S. gross domestic product annually, according to the National Oceanic and Atmospheric Administration. But because fish are worth money only after they are caught, not everyone is pleased with aggressive efforts to ensure that there will be more fish tomorrow. Some people want more fish today. Restrictions designed to rebuild depleted stocks are costing them money in the short term.

For that reason, various amendments have been introduced in Congress that would weaken the gains of the Sustainable Fisheries Act and jeopardize fisheries. In particular, industry interests have sought to lengthen recovery times. Currently, the law requires plans for rebuilding most fish populations within a decade, with exceptions for slow-growing species. (Many fish could recover twice as fast if fishing was severely limited, but a decade was deemed a reasonable amount of time: It is practical biologically, meaningful within the working lifetime of individual fishers, and yet rapid enough to allow trends to be perceived and adjustments made if necessary.) Longer rebuilding schedules make it harder to assess whether a fish population is growing or shrinking in response to management efforts. The danger is that overfishing will continue in the short term, leading to tighter restrictions and greater hardship later on.

Recovered fish populations would contribute substantially to the U.S. economy and to the welfare of fishing communities. In just five years since the Sustainable Fisheries Act went into effect, the outlook for U.S. fisheries has improved noticeably, for the first time in decades. The only sensible course is to move forward: to eliminate overfishing, reduce bycatch, and protect and improve habitat. It would be foolish to move backward and allow hard-won gains to unravel just as they are taking hold. Yet the debate continues.

Alternative routes

The Sustainable Fisheries Act is not the only defense against overfishing. There are two promising alternatives: marine protected areas and consumer seafood-awareness campaigns. Although traditional fishery management regulations have led to the closure of some areas to certain types of fishing gear, conservation groups in the past five years have pushed for a complete prohibition on fishing in certain spawning or nursery areas. They argue that fishing methods such as dragging bottom-trawl nets are inherently destructive to seafloor habitats and that vulnerable structures such as coral reefs need to be left alone to regenerate healthy marine communities.

On one tenet of that approach, the science is clear: Fish do grow larger and more abundant in areas where there is no fishing, and larger fish produce disproportionately more offspring than smaller fish. A single 10-year-old red snapper, for example, lays as many eggs as 212 two-year-old red snappers.

But on another score–the idea that fishing improves outside protected areas as a result of “spillover”–the evidence is less conclusive. Studies in different countries have produced contradictory results. Only a fraction of one percent of U.S. waters have been designated no-take reserves, and not enough time has passed to show whether or how much people fishing outside reserve boundaries will benefit. New studies specifically designed to answer that question are now being conducted.

Recreational fishing groups have generally fought attempts to put areas off limits. Their opposition has resulted in the introduction of a bill in Congress called the Freedom to Fish Act, which has ardent supporters. Recently, though, conservation and recreational fishing groups have begun a new dialogue to explain their respective positions on the science and the sensitivities of closing marine areas to fishing. I predict that the outcome will be a “zoning plan” that specifies what kinds of fishing should be allowed where, guaranteeing access to certain areas in exchange for putting other areas off limits.

The other major conservation alternative is to promote best fishing practices by harnessing consumer purchasing power. One such market approach is ecolabeling, as in “dolphin-safe” tuna. The Marine Stewardship Council–founded originally as a partnership between the corporate giant Unilever and the World Wide Fund for Nature–is leading a global effort to encourage fishing establishments to apply for certification. Certified products receive a logo telling consumers that the product is from a sustainable fishery.

Another market approach is a campaign to raise public awareness through wallet cards, books, and Web sites that help consumers choose well-managed, sustainably caught seafood. That effort has been carried out mainly by conservation groups, often in partnership with aquariums and other institutions, and has been aided by prominent chefs. Some specific goals of these campaigns have been a swordfish recovery plan, effective protection of endangered sturgeon, and better policing against illegal catches of Chilean seabass.

Although results are mixed, a new awareness about seafood has developed among consumers. Boycotts of Atlantic swordfish, Beluga caviar, and Chilean seabass have spread, and some seafood sellers are beginning to market toward this more sensitized consumer niche. I predict that over the next few years, consumer education will become the largest area of growth and change in the toolbox of ocean conservation strategy.

Bolstering Support for Academic R&D

Funding for academic research from all sources grew quite satisfactorily in the 1980s, at about 5.6 percent per year in constant dollars. Yet when I examined the picture in 1991, the future looked dim. The United States was just emerging from a recession, federal deficits were projected as far into the future as we could see, and the country was struggling to regain international competitiveness in a number of industries. The incoming president of the American Association for the Advancement of Science had issued a gloomy report stating that the academic research community suffered from low morale as a result of inadequate support and dim prospects.

Although I did not agree with all of the report’s arguments, I did believe that the academic research community had to vastly improve its appeal to its traditional funding sources if it hoped to thrive. It would have to “persuade our political and industrial supporters that academic research contributes to practical applications and to the education of students in sufficient measure to warrant the level of support we seek–particularly now, when adjusting to finite resources is fast becoming society’s watchword.” The community needed to improve its advocacy in the federal, state, and industrial arenas, and it had to bolster the confidence of sponsors by using resources more effectively and efficiently.

The scene has indeed changed during the past decade. Advocacy at the federal level has improved. More than three dozen research universities now have Washington offices, charged with establishing relations with members of Congress and their staffs and with agency and program heads. The aim of those Washington advocates is to promote favorable budgets for academic research generally and then to steer money to their institutions. Their job is little different from that of their counterparts in other interest groups. Beyond that, other activities that were once uncommon in academe have come into play. Professional societies now regularly communicate their views to Congress on a variety of matters. These societies also band together on particular issues, giving their voices even greater strength. Perhaps more important, some societies have learned how to practice constituency politics, alerting members to contact their congressional representatives on important matters. The scientific community has learned that politicians listen most attentively to people who can vote for them.

At the state level, support for academic research is widely associated with regional economic development. A strong research university embedded in a supporting infrastructure–one that includes incubators, tech parks, sources of angel and venture capital, mentoring and networking structures, and tech-based industries–can be an important source of economic development. In past decades, nearly all research universities viewed large, established, technically based companies as the principal source of industrial support for academic research and, often, the principal beneficiary. Today, spinoff of entrepreneurial ventures from academic research (the strategy behind Silicon Valley and Route 128) has become widespread. Because most such startups are too small to be a primary source of support, states step in. Despite those developments, however, the larger established firms continue to be the principal source of direct industrial funding, and that source has been growing.

On the matter of improving the use of resources, I made two main recommendations. One was that research institutions should do a better job of utilizing capital assets such as buildings and equipment. Little has been accomplished on that front. The other recommendation was that each campus should be selective in the fields of research it pursued and should consider the type of local industry, nearby research institutions, and other synergistic resources when making its choices. Here there has been progress. Instead of every research university trying to be all-encompassing, many campuses have enlisted faculty, students, alumni, and administrators to decide on areas of emphasis. As they have grown in number, research universities have recognized the need to become more distinctive.

In a subsequent article, “The Business of Higher Education” (The Bridge, Spring 1994), I dealt more extensively and critically with the leadership and business side of academic institutions: budgeting and accounting systems, committee functions, management skills, organizational effectiveness, and so on. For example, I noted that the classic accounting system used by universities–“fund” accounting–has been an egregious saboteur of good management. Among other defects, it undermines the ability to align programs with strategic directions and choices. It also fails to distinguish adequately between operating and capital expenditures and makes it difficult to allocate decisions to the most appropriate spot in the organization.

Fortunately, in the mid-1990s private colleges and universities adopted a new financial reporting model promulgated by the Financial Accounting Standards Board. The new model requires a balance sheet, an operating statement, and a statement of cash flows, which results in a system more congenial to the strategic management of resources. Although the full benefit of these changes remains to be achieved, they are at least a step in the right direction.

But before we congratulate ourselves on solving the funding problems I discussed a decade ago, let us take a look at what has actually happened. The 1990s departed radically from what had been expected at their outset. Federal deficits vanished, the economy grew handsomely, and the stock market boomed. One might therefore think that the support of academic research should have grown faster than in the 1980s. But the reverse is true. From all sources, support for academic R&D grew 77 percent (in constant dollars) during the 1980s, but only 49 percent in the 1990s. Federal support grew 55 percent in the 1980s, 47 percent in the 1990s. Even the biomedical area, which captured at least half of all increases (from all sources) in the two decades, grew less rapidly in the 1990s (68 percent) than in the 1980s (89 percent).
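To make those decade totals easier to compare, here is a minimal sketch that converts each cumulative increase into an equivalent compound annual growth rate (the percentages are taken from the paragraph above; the conversion itself is standard arithmetic, not a figure from the article):

```python
# Convert a cumulative ten-year increase into the equivalent compound annual growth rate.
def annual_rate(decade_growth_pct: float, years: int = 10) -> float:
    """Compound annual rate (in percent) implied by a cumulative percentage increase."""
    return ((1 + decade_growth_pct / 100) ** (1 / years) - 1) * 100

# Cumulative constant-dollar growth figures quoted in the text.
for label, growth in [("All sources, 1980s", 77), ("All sources, 1990s", 49),
                      ("Federal, 1980s", 55), ("Federal, 1990s", 47)]:
    print(f"{label}: about {annual_rate(growth):.1f} percent per year")
# All-source growth works out to roughly 5.9 percent per year in the 1980s
# versus roughly 4.1 percent per year in the 1990s, underscoring the slowdown.
```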

Funding did, however, become slightly less volatile during the past decade. Although annual variations during both decades were quite marked, the wide swings of the 1980s, when increases ranged from 0.4 to 9.5 percent, narrowed to a range of 2.4 to 7.7 percent in the 1990s. That modulation was true of both total support and federal support. A similar trend held for total biomedical support, where annual increases ranged from 1.9 percent to 10.2 percent in the 1980s and from 2 percent to 9.5 percent in the 1990s.

If I had written the article today instead of 12 years ago, I do not believe my advice would have been much different. Although our advocacy has improved in all quarters, it obviously needs to become better still if we hope to achieve real gains in support for research. And although academe has taken steps toward the more effective use of resources, too many bad habits prevail.

Math Education at Risk

Two decades ago, the United States awoke to headlines declaring that it was “A Nation at Risk.” In dramatic language, the National Commission on Excellence in Education warned of a “rising tide of mediocrity” that, had it been “imposed by a foreign power,” might well have been interpreted as “an act of war.” Shortly thereafter, dismal results from a major international assessment of mathematics education confirmed the commission’s judgment. Analysts at that time described U.S. mathematics education as the product of an “underachieving curriculum.”

Alarmed by these unfavorable assessments, mathematicians and mathematics educators launched an energetic and coordinated campaign to move mathematics education out from underachievement. Their strategy: national standards for school mathematics–an unprecedented venture for the United States–coordinated with textbooks, tests, and teacher training. Science shortly followed suit in this campaign for standards, as did other subjects.

With one exception, none of the nation’s grand objectives for mathematics and science education has been met or even approached.

By 1990, the president and the state governors had formally adopted six national goals for education, including this one: “By the year 2000, United States students will be the first in the world in mathematics and science achievement.” Subsequently, states established standards in core academic subjects and introduced tests aligned with these standards to measure the performance of students, teachers, and schools.

Yet today, the nation remains very much at risk in this area. Although newsmaking perils appear more immediate (viruses, terrorists, deficits, unemployment), underachievement in education remains the most certain long-term national threat. Despite brave rhetoric and countless projects, we have not vanquished educational mediocrity, especially not in mathematics and science. Judging by recent policy proposals, we have not even grasped the true character of the problem.

Solid effort, poor results

The nation may deserve an A for effort, or at least a B+. All states but one have established content standards in mathematics, and most have done so in science. The number of states requiring more than two years of high-school mathematics and science has doubled. Many more high-school students, including students in all racial and ethnic groups, now take advanced mathematics and science courses. International comparisons show that U.S. students receive at least as much instruction in mathematics and science as students in other nations, and spend about as much time on homework.

Notwithstanding these notable efforts, data from national and international assessments show that, with one exception, none of the nation’s grand objectives for mathematics and science education has been met or even approached.

  • Student performance has stagnated. The average mathematics performance of 17-year-olds on the National Assessment of Educational Progress (NAEP) is essentially the same now as it was in 1973. During the 1970s, performance declined slightly, then rose during the 1980s, but has remained essentially constant since then. Science performance on the NAEP during the past three decades has generally mirrored that of mathematics: decline followed by recovery and then stagnation.
  • Mathematics performance remains substandard. In 2000, only one in six 12th-grade students achieved the NAEP “proficient” level, and only 1 in 50 performed at the “advanced” level. That same year, 34 percent of all students enrolled in postsecondary mathematics departments were in remedial courses, up from 28 percent in 1980.
  • The gap between low- and high-performing students is immense. In mathematics, the difference between the highest and lowest NAEP quartiles for 17-year-olds is approximately the same as the difference between the average scores for 17- and 9-year-olds–roughly equivalent to eight years of schooling.
  • Racial and ethnic gaps are persistent and large. In 2000, one in three Asian/Pacific Islanders in the 12th grade and one in five white 12th graders scored at the NAEP’s proficient level, but less than 1 in 25 Hispanic and black 12th graders scored at that level. Modest gains during the 1970s and 1980s narrowed longstanding gaps among racial and ethnic groups, but there is no evidence of any further narrowing since 1990. In fact, there is some evidence that the gap between whites and blacks in mathematics has widened.
  • Students in poverty perform poorly. Twelfth-grade students who are eligible for the national school lunch program perform on the NAEP at about the same level as 8th-grade students who are not in the school lunch program. Throughout school, low-income students are twice as likely as their higher-income peers to score below the “basic” level of achievement in mathematics.
  • U.S. students remain uncompetitive internationally. Repeated assessments reveal little improvement in the U.S. ranking among nations and a widening of the cross-national achievement gap as students progress through school. Even the most advanced U.S. students perform poorly compared with similarly advanced students in other countries. Confirming evidence could be seen (at least when the economy was flourishing) in urgent business support for the H-1B visa program, which allows U.S. companies to hire skilled foreign workers when no U.S. citizens have proper qualifications.

One important exception to this recital of failure is gender equity. After decades of underrepresentation, girls are now as likely as boys to take advanced mathematics classes and more likely to take biology and chemistry. They remain, however, less likely to take physics. More important, the differences in performance between boys and girls on most high-school mathematics and science examinations are no longer statistically significant.

College attendance has increased dramatically during the past 20 years, even among low-performing students. At the same time, failure rates on high-school exit tests that are aligned with new state standards have shocked parents and led to political revolts. More telling, the gap between high- and low-performing students within each grade remains particularly wide, posing a major challenge for new mandatory programs designed to hold all students accountable to the same set of high standards.

Dearth of remedies

Many diagnoses but few remedies have emerged. International comparisons suggest that mathematics and science curricula in the United States are excessively repetitive and slight important topics. Instruction in U.S. classrooms focuses on developing routine skills (often to prepare students for high-stakes tests) and offers few opportunities for students to engage in high-level mathematical thinking.

In the mid-1990s, a vociferous national argument erupted over how to respond to this new round of dismal tidings (dubbed the “Math Wars” by the media). Advocates of traditional curricula and pedagogy were pitted against people who argued that old methods had failed and that new approaches were needed for the computer age. This debate in mathematics education paralleled contemporaneous cultural divides over reading, core curricula, and traditional values.

The lack of demonstrable progress in improving educational performance in mathematics and other subjects has led some people to view the problem as inherently unsolvable within a system of public education. This view is often supported by statistics that appear to show little correlation between expenditures and achievement in education from kindergarten through 12th grade. Down this road lies the political quagmire of vouchers and school choice.

Other observers see the lack of progress more as an indicator of flawed strategies–of widespread underestimation of the depth of understanding and intensity of effort required to teach mathematics effectively. A lack of respect for the complexity of the problem encourages quick fixes (smaller classes, higher standards, more tests, higher teacher salaries) that do not yield greater disciplinary understanding or pedagogical skill.

A decade after the first President Bush said that the United States would be “first in the world,” Congress enacted the signature legislation of another President Bush, sadly entitled “No Child Left Behind.” Faced with overwhelming evidence of failure to meet the 1990 goal, this unprecedented legislation imposes the authority and financial muscle of the federal government across the entire landscape of K-12 education. The law mandates annual testing of students in the 3rd through 8th grades and in 11th grade, with reporting disaggregated by ethnic categories. Schools that do not demonstrate annual improvements in each category at each grade are subject to various sanctions, and students in these “failing” schools will be allowed to move to other schools.

Advocates of federally mandated tests argue that making progress requires measuring progress. Critics see classrooms turning into test-prep centers, where depth and cohesion are abandoned for isolated skills found on standardized tests. Totally absent from the current debate is the 1990s ideal of being “first in the world.” Chastened by experience, the nation’s new educational aspiration appears much more modest: Just avoid putting children at risk.

New Life for Nuclear Power

Most of what I wrote in “Engineering in an Age of Anxiety” and “Energy Policy in an Age of Uncertainty” I still believe: Inherently safe nuclear energy technologies will continue to evolve; total U.S. energy output will rise more slowly than it has hitherto; and incrementalism will, at least in the short run, dominate our energy supply. However, my perspective has changed in some ways as the result of an emerging development in electricity generation: the remarkable extension of the lifetimes of many generating facilities, particularly nuclear reactors. If this trend continues, it could significantly alter the long-term prospect for nuclear energy.

This trend toward nuclear reactor “immortality” has become apparent over the past 20 years: the projected lifetime of a reactor is far longer than we had estimated when we licensed these reactors for 30 to 40 years. Some 14 U.S. reactors have been relicensed, 16 others have applied for relicensing, and 18 more applications are expected by 2004. According to former Nuclear Regulatory Commission Chairman Richard Meserve, essentially all 103 U.S. power reactors will be relicensed for at least another 20 years.

If nuclear reactors receive normal maintenance, they will “never” wear out, and this will profoundly affect the economic performance of the reactors. Time annihilates capital costs. The economic Achilles’ heel of nuclear energy has been its high capital cost. In this respect, nuclear energy resembles renewable energy sources such as wind turbines, hydroelectric facilities, and photovoltaic cells, which have high capital costs but low operating expenses. If a reactor lasts beyond its amortization time, the burden of debt falls drastically. Indeed, according to one estimate, fully amortized nuclear reactors with total electricity production costs (operation and maintenance, fuel, and capital costs) below 2 cents per kilowatt hour are possible.
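
A back-of-the-envelope sketch shows why amortization dominates this arithmetic. The inputs below (overnight capital cost, interest rate, amortization period, operating and fuel costs, capacity factor) are illustrative assumptions, not figures from the text; the point is only that once the capital charge disappears, a total cost below 2 cents per kilowatt hour becomes plausible.

```python
# Illustrative sketch of how amortization drives nuclear electricity costs.
# All input values are assumptions chosen for illustration.

def capital_recovery_factor(rate, years):
    """Annual payment per dollar of capital, amortized over `years` at interest `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

capital_cost = 2000.0      # assumed overnight capital cost, $/kW
interest_rate = 0.08       # assumed annual interest rate
amortization_years = 30    # assumed amortization period
om_plus_fuel = 0.018       # assumed operation, maintenance, and fuel cost, $/kWh
capacity_factor = 0.90     # assumed fraction of the year spent at full power

kwh_per_kw_year = capacity_factor * 8760  # kWh generated per kW of capacity per year

# Capital charge per kWh while the debt is being repaid
capital_per_kwh = (capital_cost * capital_recovery_factor(interest_rate, amortization_years)
                   / kwh_per_kw_year)

print(f"Cost during amortization: {100 * (capital_per_kwh + om_plus_fuel):.1f} cents/kWh")
print(f"Cost after amortization:  {100 * om_plus_fuel:.1f} cents/kWh")
# With these assumptions, total cost falls from roughly 4 cents/kWh to
# under 2 cents/kWh once the capital has been fully paid off.
```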

Electricity that inexpensive would make it economically feasible to power operations such as seawater desalinization, fulfilling a dream that was common in the early days of nuclear power. President Eisenhower proposed building nuclear-powered industrial complexes in the West Bank as a solution to the Middle East’s water problem, and Sen. Howard Baker promulgated a “sense of the U.S. Senate” resolution authorizing a study of such complexes as part of a settlement of the Israel-Palestinian conflict.

If power reactors are virtually immortal, we have in principle achieved nuclear electricity “too cheap to meter.” But there is a major catch. The very inexpensive electricity does not kick in until the reactor is fully amortized, which means that the generation that pays for the reactor is giving a gift of cheap electricity to the next generation. Because such altruism is not likely to drive investment, the task becomes to develop accounting or funding methods that will make it possible to build the generation capacity that will eventually be a virtually permanent part of society’s infrastructure.

If the only benefit of these reactors is to produce less expensive electricity and the market is the only force driving investment, then we will not see a massive investment in nuclear power. But if immortal reactors by their very nature serve purposes that fall outside of the market economy, their original capital cost can be handled in the way that society pays for infrastructure.

Such a purpose has emerged in recent years: the need to limit CO2 emissions to protect against climate change. To a remarkable degree, the incentive to go nuclear has shifted from meeting future energy demand to controlling CO2. At an extremely low price, electricity uses could expand to include activities such as electrolysis to produce hydrogen. If the purpose of building reactors is CO2 control rather than producing electricity, then the issue of going nuclear is no longer a matter of simple economics. Just as the Tennessee Valley Authority’s (TVA’s) system of dams is justified by the public good of flood control, the system of reactors would be justified by the public good of CO2 control. And just as TVA is underwritten by the government, the future expansion of nuclear energy could, at the very least, be financed by federally guaranteed loans. Larry Foulke, president of the American Nuclear Society, has proposed the creation of an Energy Independence Security Agency, which would underwrite the construction of nuclear reactors whose primary purpose is to control CO2.

Making a significant contribution to CO2 control would require a roughly 10-fold increase in the world’s nuclear capacity. Providing fissile material to fuel these thousands of reactors for an indefinite period would require the use of breeder reactors, a technology that is already available; or the extraction of uranium from seawater, a technology yet to be developed.
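
As a rough sense of scale (assuming, for illustration, something on the order of 440 power reactors operating worldwide today, a figure not given in the text), a 10-fold expansion squares with the fleet of roughly 4,000 reactors, grouped into about 500 parks of up to 10 reactors each, discussed in the following paragraphs.

```python
# Rough scale of a 10-fold expansion of world nuclear capacity.
# The size of today's fleet (~440 reactors) is an assumption used for illustration.
current_reactors = 440
expansion_factor = 10
future_reactors = current_reactors * expansion_factor     # ~4,400 reactors

reactors_per_park = 10                                    # parks of up to 10 reactors
parks_needed = future_reactors / reactors_per_park        # ~440 parks

print(f"Reactors after a {expansion_factor}-fold expansion: about {future_reactors:,}")
print(f"Parks of {reactors_per_park} reactors each: about {parks_needed:.0f}")
```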

Is the vision of a worldwide system of as many as 4,000 reactors to be taken seriously? In 1944, Enrico Fermi himself warned that the future of nuclear energy depended on the public’s acceptance of an energy source encumbered by radioactivity and closely linked to the production of nuclear weapons. Aware of these concerns, the early advocates of nuclear power formulated the Acheson-Lilienthal plan, which called for rigorous control of all nuclear activities by the International Atomic Energy Agency (IAEA). But is this enough to make the public willing to accept 4,000 large reactors? Princeton University’s Harold Feiveson has already said that he would rather forego nuclear energy than accept the risk of nuclear weapons proliferation in a 4,000-reactor world.

I cannot concede that our ingenuity is unequal to living in a 4,000-reactor world. With thoughtful planning, we could manage the risks. I imagine having about 500 nuclear parks, each of which would have up to 10 reactors plus reprocessing facilities. The parks would be regulated and guarded by a much-strengthened IAEA.

What about the possibility of another Chernobyl? Certainly today’s reactors are safer than yesterday’s, but the possibility of an accident is real. Last year, alarming corrosion was found at Ohio’s Davis-Besse plant, apparently the result of a breakdown in the management and operating practices at the plant. Chernobyl and Davis-Besse illustrate the point of Fermi’s warning: Although nuclear energy has been a successful technology that now provides 20 percent of U.S. electricity, it is a demanding technology.

In addition to the risk of accidents, we face a growing possibility that nuclear material could fall into the hands of rogue states or terrorist groups and be used to create nuclear weapons. I disagree with Feiveson’s conclusion that this risk is too great to bear. I believe that we can provide adequate security for 500 nuclear parks.

Is all this the fantasy of an aging nuclear pioneer? Possibly so. In any case, I won’t be around to see how the 21st century deals with CO2 and nuclear energy. Nevertheless, this much seems clear: If we are to establish a proliferation-proof fleet of 500 nuclear parks, we will have to expand on the Acheson-Lilienthal plan in ways that will–as George Shultz observed in 1989–require all nations to relinquish some national sovereignty.

Whither the U.S. Climate Program?

Approximately 50 years ago, the first contemporary stirring within the scientific community about climate change began when Roger Revelle and Hans Suess wrote that “human beings are now carrying out a large-scale geophysical experiment.” Since that time, the scientific community has made remarkable progress in defining the effect that increased concentrations of greenhouse gases could have on the global climate and in estimating the nature and scale of the consequences. The political discussion about how to respond to this threat has been less successful.

Although a small vocal group of scientists continues to raise important questions about whether the data and the theory validate the projected trend in the climate, these views have been more than counteracted by the overwhelming consensus of scientists that the case for the projected climate change is solid. The 2001 assessment by the Intergovernmental Panel on Climate Change of the World Meteorological Organization projects that by the year 2100, there will be a global temperature increase of 1.4 to 5.8 degrees centigrade, a global sea level rise of 9 to 88 centimeters, and a significant increase in the number of intense precipitation events. The wide range of these estimates reflects differences in assumptions about population projections, technological developments, and economic trends that are used in constructing the scenarios.

As the consensus on the likelihood of climate change became more robust, the world’s political leaders began to take notice. At a 1992 meeting in Rio de Janeiro, the world’s nations agreed to the United Nations Framework Convention on Climate Change (FCCC), which called for the “stabilization of the concentration of greenhouse gases at a level that would prevent dangerous climatic consequences.” Unfortunately, they were not able to specify what level of concentration would be acceptable or what constituted “dangerous” climate change. Instead, they established a Conference of Parties to work out the details and develop a plan of action to control climate change.

When the Conference of Parties convened in 1997 to produce the plan of action known as the Kyoto Protocol, the scientific and political differences among nations came into sharp focus. The U.S. Senate by unanimous vote declined to approve the plan because it excused many less developed countries such as India and Brazil from the requirement to curb greenhouse gas emissions and because the senators were afraid that meeting its requirement of reducing greenhouse gas emissions to 7 percent below the 1990 level by 2012 would have disastrous consequences for the U.S. economy.

A final political blow occurred in 2001, when President Bush withdrew the United States as a signatory to the Kyoto Protocol and announced that the country would meet its climate objectives by voluntary means. He called on the National Academies to review the science and recommend courses of action. The Academies confirmed that the threat of global climate warming was real but acknowledged that there were considerable uncertainties. The administration established a revised climate program overseen by a cabinet-level committee and assigned the responsibility for the climate science and technology programs to the Department of Commerce and the Department of Energy, respectively. It changed the U.S. goal from reducing the absolute amount of greenhouse gas emissions to reducing their intensity per unit of gross domestic product, which means that total emissions could increase in a growing economy.

In spite of the political controversies surrounding the climate change issue, the U.S. research program is now barreling ahead. One goal is to define more precisely what concentration of greenhouse gases in the atmosphere would result in what the FCCC described as dangerous. Work will continue on trying to further refine the expected global temperature increase, sea level rise, and precipitation change. A central question now is what will happen at the local and regional level. The U.S. Global Change Research Program has already completed a preliminary assessment of the consequences of climate change for 16 regions of the country and four economic sectors. Further research is needed to confirm and refine these assessments.

Technology to the rescue

The climate-related field of research that needs the most attention now is energy technology. Most energy research has been done with the goal of reducing cost or improving efficiency. Only in recent years has the goal been to reduce carbon emissions and the atmospheric concentration of greenhouse gases. Much more needs to be done with that focus. By some time in the latter half of this century we will need to have in place a transformed energy system that has been largely “decarbonized.” British Prime Minister Tony Blair has declared that the United Kingdom will reduce its emissions of greenhouse gases by 60 percent by the year 2050. President Bush has not been so bold.

A variety of technologies deserve attention. The growing use of renewable energy systems that entail little or no use of carbon will play an important role. Photovoltaic cells are proving their value in remote niche applications, wind farms on land and in coastal waters are advancing rapidly in Europe and the United States, and biofuels are showing increasing promise. The next generation of nuclear power plants promises to be much safer and to produce much less hazardous waste, and nuclear fusion may eventually become practical. Another approach to reducing atmospheric carbon concentrations that is showing promise is carbon sequestration. Carbon can be sequestered in the terrestrial biosphere in trees, plants, and soils, and such action is being contemplated. The technology to capture carbon emissions at the point of combustion and to store that carbon in geological formations under land and sea is becoming a reality. We are also moving forward with efforts to reduce emissions from vehicles. With the development of hybrid vehicles that combine electric motors with gasoline engines, the automobile has the potential to soften its impact on climate in the near term. With the development of fuel cells that power vehicles with hydrogen, we can envision a future in which transportation is no longer a major source of greenhouse gas emissions.

Combining these technological advances with a continued assault by science on the outstanding problems could create the conditions necessary to meet the terms of the FCCC. However, caveats are in order. Climate issues are inherently international and require the participation of developing as well as developed countries. Although the scientific research can be conducted primarily by the developed countries, the implementation of new technologies must take place everywhere. Because many countries lack the resources to invest in new energy and transportation technology, this is an area where the United States could be a leader in providing financial assistance and demonstrating its willingness to work constructively with other nations for the benefit of all.

Biodiversity in the Information Age

My 1985 Issues article was among the first to document and assess the problem of biodiversity in the context of public policy. It was intended to bring the extinction crisis to the attention of environmental policymakers, whose focus theretofore had been almost entirely on pollution and other problems of the physical environment. Several factors contributed to this disproportion: Physical events are simpler than biological ones, they are easier to measure, and they are more transparently relevant to human health. No senator’s spouse, it had been said, ever died of a species extinction.

The mid-1980s saw a steep increase in awareness concerning the living environment. In 1986, the National Academy of Sciences and the Smithsonian Institution cosponsored a major conference on biodiversity, assembling for the first time the scores of specialists representing the wide range of disciplines, from systematics and ecology to agriculture and forestry, that needed to merge their expertise in basic and applied research to address the critical questions. The papers were published in the book BioDiversity, which became an international scientific bestseller. The term biodiversity soon became a household word. By 1992, when I published The Diversity of Life, the scientific and popular literature on the subject had grown enormously. The Society for Conservation Biology emerged as one of the fastest-growing of all scientific societies. Membership in organizations dedicated to preserving biodiversity grew manyfold. Now there are a dozen new journals and shelves of technical and popular books on the topic.

The past decade has witnessed the emergence of a much clearer picture of the magnitude of the biodiversity problem. Put simply, the biosphere has proved to be more diverse than was earlier supposed, especially in the case of small invertebrates and microorganisms. An entire domain of life, the Archaea, has been distinguished from the bacteria, and a huge, still mostly unknown and energetically independent biome–the subterranean lithoautotrophic microbial ecosystems–has been found to extend three kilometers or more below the surface of Earth.

In the midst of this exuberance of life forms, however, the rate of species extinction is rising, chiefly through habitat destruction. Most serious of all is the conversion of tropical rainforests, where most species of animals and plants live. The rate has been estimated, by two independent methods, to fall between 100 and 10,000 times the prehuman background rate, with 1,000 times being the most widely accepted figure. The price ultimately to be paid for this cataclysm is beyond measure in foregone scientific knowledge; new pharmaceutical and other products; ecosystems services such as water purification and soil renewal; and, not least, aesthetic and spiritual benefits.

Concerned citizens and scientists have begun to take action. A wide range of solutions is being proposed to stanch the hemorrhaging of biodiversity at the regional as well as the global level. Since 1985, the effort has become more precisely charted, economically efficient, and politically sensitive.

The increasing attention given to the biodiversity crisis highlights the inadequacy of biodiversity research itself. As I stressed in 1985, Earth remains in this respect a relatively unexplored planet. The total number of described and formally named species of organisms (plant, animal, and microbial) has grown, but not by much, and today is generally believed to lie somewhere between 1.5 million and 1.8 million. The full number, including species yet to be discovered, has been estimated in various accounts that differ according to assumptions and methods from an improbably low 3.5 million to an improbably high 100 million. By far the greatest fraction of the unknown species will be insects and microorganisms.

Since the current hierarchical, binomial classification was introduced by Carolus Linnaeus 250 years ago, 10 percent, at a guess, of the species of organisms have been described. Many systematists believe that most and perhaps nearly all of the remaining 90 percent can be discovered, diagnosed, and named in as little as one-tenth that time–about 25 years. That potential is the result of two developments needed to accelerate biodiversity studies. The first is information technology: It is now possible to obtain high-resolution digitized images of specimens, including the smallest of invertebrates, that are better than can be perceived through conventional dissecting microscopes. Type specimens, sequestered in museums scattered around the world and thus unavailable except by mail or visits to the repositories, can now be photographed and made instantly available everywhere as “e-types” on the Internet. Recently, the New York Botanical Garden made available the images of almost all its types of 90,000 species. In a parallel effort, Harvard’s Museum of Comparative Zoology has laid plans to publish e-types of its many thousands of insect species. As the total world collection of primary type specimens is brought online, covering most or all of perhaps one million species that can be imaged in sufficient detail to be illustrated in this manner, the rate of taxonomic reviews of named species and the discovery of new ones can be accelerated 10 times or more over that of predigital taxonomy.
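
A quick arithmetic check, using the round numbers quoted in this and the preceding paragraph (the true totals are, of course, uncertain), shows how large the implied acceleration is:

```python
# Back-of-the-envelope check on the implied acceleration of species description.
# Figures are the rough ones quoted in the text.
fraction_described = 0.10    # share of species described since Linnaeus
years_elapsed = 250          # years of taxonomy to date
fraction_remaining = 1 - fraction_described
years_projected = 25         # projected time to describe the rest

historical_rate = fraction_described / years_elapsed   # share described per year so far
required_rate = fraction_remaining / years_projected   # share per year needed

print(f"Required acceleration: about {required_rate / historical_rate:.0f}x the historical rate")
# ~90x: more than the 10-fold gain credited to digital imaging alone, so much of
# the remainder would presumably have to come from the genomic tools described next.
```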

The second revolution about to catapult biodiversity studies forward is genomics. With base pair sequencing automated and growing ever faster and less expensive, it will soon be possible to describe bacterial and archaean species by partial DNA sequences and to subsequently identify them by genetic bar-coding. As genomic research proceeds as a broader scientific enterprise, microorganism taxonomy will follow close behind.

The new biodiversity studies will lead logically to an electronic encyclopedia of life designed to organize and make immediately available everything known about each of the millions of species. Its composition will be led for a time by the industrialized countries. However, the bulk of the work must eventually be done in the developing countries. The latter contain most of the world’s species, and they are destined to benefit soonest from the research. Developing countries are in desperate need of advanced scientific institutions that can engage the energies of their brightest young people and encourage political leaders to create national science programs. The technology needed is relatively inexpensive, and its transfer can be accomplished quickly. The discoveries generated can be applied directly to meet the concerns of greatest importance to the geographic region in which the research is conducted, being equally relevant to agriculture, medicine, and economic growth.

Information Technology and the University

A decade ago, many people had yet to accept that the inexorable progress of information technology (IT) would result in fundamental change in universities. Experience is shrinking that group. The basic premises that underlie the need for change are the same today as they were then, but are even more compelling:

  • The modern research university provides a range of functions that are incredibly important to our society, all of which are highly information-intensive.

  • IT will continue to become faster and cheaper at an exponential pace for the foreseeable future, enabling alternatives to the ways that universities have traditionally fulfilled their various functions, and possibly even to the university as provider of those functions.

  • It would be naïve to assume that the university, unlike other businesses, will escape transformation of its roles and character as these alternatives become available.

  • Precisely because of the importance of the functions provided by the research university, it behooves us to explore deeply and critically what sorts of changes might occur so that, if they do occur, we are better prepared for them.

It’s hard for those of us who have spent much of our lives as academics to look inward at the university, with its traditions and obvious social value, and accept the possibility that it might change in dramatic ways. But although its roots are millennia old, the university has changed before. In the 17th and 18th centuries, scholasticism slowly gave way to the scientific method as the way of knowing truth. In the early 19th century, universities embraced the notion of secular, liberal education and began to include scholarship and advanced degrees as integral parts of their mission. After World War II, they accepted an implied responsibility for national security, economic prosperity, and public health in return for federally funded research. Although the effects of these changes have been assimilated and now seem natural, at the time they involved profound reassessment of the mission and structure of the university as an institution.

Today, the university has entered yet another period of change driven by powerful social, economic, and technological forces. To better understand the implications for the research university, in February 2000 the National Academies convened a steering committee that, through a series of meetings and a workshop, produced the report Preparing for the Revolution (National Academies Press, 2002). Subsequently, the Academies have created a roundtable process to encourage a dialogue among university leaders and other stakeholders, and in April 2003 held the first such dialogue with university presidents and chancellors.

The first finding of the Academies’ steering committee was that the extraordinary pace of the IT evolution is not only likely to continue but could well accelerate. One of the hardest things for most people to understand is the compound effect of this exponential rate of improvement. For the past four decades, the speed and storage capacity of computers have doubled every 18 to 24 months; the cost, size, and power consumption have become smaller at about the same rate. As a result, today’s typical desktop computer has more computing power and storage than all the computers in the world combined in 1970. In thinking about changes in the university, one must think about the technology that will be available in 10 or 20 years–technology that will be thousands of times more powerful as well as thousands of times cheaper.
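
A minimal calculation, using the doubling times cited in the paragraph, illustrates the compounding involved; the 10- and 20-year horizons are the ones the committee had in mind:

```python
# Compound growth in computing capability, assuming a doubling every 18 to 24 months.
def growth_factor(years, doubling_time_years):
    return 2 ** (years / doubling_time_years)

for years in (10, 20):
    fast = growth_factor(years, 1.5)   # doubling every 18 months
    slow = growth_factor(years, 2.0)   # doubling every 24 months
    print(f"{years} years: {slow:,.0f}x to {fast:,.0f}x more capable")
# Roughly 30x to 100x in a decade, and 1,000x to 10,000x over two decades:
# the "thousands of times more powerful" technology the committee describes.
```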

The second finding of the committee, in the words of North Carolina State University Chancellor Mary Anne Fox, was that the impact of IT on the university is likely to be “profound, rapid, and discontinuous,” affecting all of its activities (teaching, research, and service), its organization (academic structure, faculty culture, financing, and management), and the broader higher education enterprise as it evolves toward a global knowledge and learning industry. If change is gradual, there will be time to adapt gracefully, but that is not the history of disruptive technologies. As Clayton Christensen explains in The Innovator’s Dilemma, new technologies are at first inadequate to displace existing technology in existing applications, but they later explosively displace those applications as they enable new ways of satisfying the underlying need.

Although it may be difficult to imagine today’s digital technology replacing human teachers, as the power of this technology continues to increase 100- to 1,000-fold each decade, the capacity to reproduce with high fidelity all aspects of human interactions at a distance could well eliminate the classroom and perhaps even the campus as the location of learning. Access to the accumulated knowledge of our civilization through digital libraries and networks, not to mention massive repositories of scientific data from remote instruments such as astronomical observatories or high-energy physics accelerators, is changing the nature of scholarship and collaboration in very fundamental ways. Each new generation of supercomputers extends our capacity to simulate physical reality at a higher level of accuracy, from global climate change to biological functions at the molecular level.

The third finding of the committee suggests that although IT will present many complex challenges and opportunities to universities, procrastination and inaction are the most dangerous courses to follow during a time of rapid technological change. After all, attempting to cling to the status quo is a decision in itself, perhaps of momentous consequence. To be sure, there are certain ancient values and traditions of the university, such as academic freedom, a rational spirit of inquiry, and liberal learning that should be maintained and protected. But just as it has in earlier times, the university will have to transform itself once again to serve a radically changing world if it is to sustain these important values and roles.

After the publication of Preparing for the Revolution, the Academies formed a standing roundtable to facilitate discussion among stakeholders. Earlier this spring, the roundtable had the opportunity to discuss these findings in a workshop with two dozen presidents and chancellors of major research universities. The conversation began with several presidents reviewing contemporary issues such as how universities can finance the acquisition and maintenance of digital technology and how they can manage the use of this technology to protect security, privacy, and integrity–issues that presidents all too often delegate to others. However, as the workshop progressed further to consider the rapid evolution of digital technology, the presidents began to realize just how unpredictable the future of their institutions had become. As University of California-Berkeley Chancellor Robert Berdahl observed, presidents have very little experience with providing strategic visions and leadership for futures driven by such disruptive technologies.

Addressing this concern, Louis Gerstner, retired CEO of IBM, shared with the presidents some of his own observations concerning leadership during a period of rapid change. The IBM experience demonstrated the dangers of resting on past successes. Instead, leaders need to view IT as a powerful tool capable of driving a process of strategic change, but only with the full attention and engagement of the chief executive.

These early efforts of the National Academies suggest that during the coming decade, the university as a physical place, a community of scholars, and a center of culture will remain much as it is today. IT will be used to augment and enrich the traditional activities of the university without transforming them. To be sure, the current arrangements of higher education may shift. For example, the new knowledge media will enable us to build and sustain new types of learning communities, free from the constraints of space and time, which may create powerful new market forces. But university leadership should not simply react to threats but instead act positively and strategically to exploit the opportunities presented by IT. As Gerstner suggested, this technology will provide great opportunities to improve the quality of our activities. It will allow colleges and universities to serve society in new ways, perhaps more closely aligned with their fundamental academic mission and values.

Looking forward two or more decades, the future of the university becomes far less certain. Although the digital age will provide a wealth of opportunities for the future, one must take great care not simply to extrapolate the past but to examine the full range of possibilities for the future. There is clearly a need to explore new forms of learning and learning institutions that are capable of sensing and understanding the change and of engaging in the strategic processes necessary to adapt or control it. In this regard, IT should be viewed as a tool of immense power to use in enhancing the fundamental roles and missions of the university as it enters the digital age.

Winning Greater Influence for Science

In this space 20 years ago, I reported on the unwritten social contract between scientists and society: an unspoken agreement that gives science a “creative separateness from involvement with goals, values, and institutions other than its own.” My conclusion then was that “To an impressive extent, [science’s] … insistence on autonomy has worked brilliantly,” although the contract came with a huge, though hidden, price tag.

This “social contract” has allowed science to pursue long-term fundamental questions and to build slowly on the basis of its new knowledge. Science has been able to do this even in the context of a society such as ours, which in most domains is impatient, excessively pragmatic, and focused only on the short term. But this same social contract is responsible for the widening disparity between the sophistication of our science and the relatively primitive state of our social and political relationships.

Now, 20 years later, both the successes and the price tag of this social contract have grown. Science has reached greater heights of sophistication and productivity, while the gap between science and public life has grown ever larger and more dangerous, to an extent that now poses a serious threat to our future. We need to understand the causes of the divide between science and society and to explore ways of narrowing the gap so that the voice of science can exert a more direct and constructive influence on the policy decisions that shape our future.

The great divide

In today’s public domain, scientists are highly respected but not nearly as influential as they should be. In the arena of public policy, their voices are mostly marginalized. They do not have the influence due to them by virtue of the importance and relevance of their work and of the promises and dangers it poses for our communal life.

Among the many reasons for science’s lagging influence, the major one is difficult to engage directly, because it is so elusive. The unfortunate reality is that scientists and the rest of society operate out of vastly different worldviews, especially in relation to assumptions about what constitutes knowledge and how to deal with it. Scientists share a worldview that presupposes rationality, lawfulness, and orderliness. They believe that answers to most empirical problems are ultimately obtainable if one poses the right questions and approaches them scientifically. They are comfortable with measurement and quantification, and they take the long view. They believe in sharing information, and their orientation is internationalist because they know that discoveries transcend borders.

The nonscientific world of everyday life in the United States marches to a different drummer. Public life is shot through and through with irrationality, discontinuity, and disorder. Decisionmakers rarely have the luxury of waiting for verifiable answers to their questions, and when they do, almost never go to the trouble and cost of developing them. Average Americans are uncomfortable with probabilities, especially in relation to risk assessment, and their time horizon is short. Policymakers are apprehensive about sharing information and are more at home with national interests than with internationalism. Most problems are experienced with an urgency and immediacy that make people impatient for answers; policymakers must deal with issues as they arise and not in terms of their accessibility to rational methods of solution.

This profound difference in worldview manifests itself in many forms, some superficial, some moderately serious, and some that cry out for urgent attention. Here are three relatively superficial symptoms of the divide:

Semantic misunderstandings about the word “theory.” To the public, calling something a “theory” means that it is not supported by tested, proven evidence. Whereas a scientist understands a theory to be a well-grounded explanation for a given phenomenon, the general public understands it as “just a theory,” no more valid than any other opinion on the matter. (Evolutionary “theory” and creationist “theory” are, in this sense, both seen as untested and unproven “theories” and therefore enjoy equivalent truth value.)

Media insistence on presenting “both sides.” When this confusion over “theory” bumps up against media imperatives, the result is often a distorting effort to tell “both sides” of the story. In practice, this means that even when there is overwhelming consensus in the scientific community (as in the case of global warming), experts all too often find themselves pitted in the media against some contrarian, crank, or shill who is on hand to provide “proper balance” (and verbal fireworks). The resulting arguments actively hinder people’s ability to reach sound understanding: not only do they muddy the public’s already shaky grasp of scientific fundamentals, they leave people confused and disoriented.

Science’s assumption that scientific illiteracy is the major obstacle. When faced with the gap between science and society, scientists assume that the solution is to make the public more science-literate—to do a better job at science education and so bring nonscientists around to a more scientific mindset. This assumption conveniently absolves science of the need to examine the way in which its own practices contribute to the gap and allows science to maintain its position of intellectual and moral superiority. In addition, on a purely practical level a superficial smattering of scientific knowledge might cause more problems than it solves.

The craving for certainty about risk and threat. The public and policymakers crave a level of certainty that the language and metrics of science cannot provide. For example, when the public is alarmed by something like the anthrax scare or some future act of small-scale biological or chemical terrorism, science will assess the threat in the language of probabilities. But this metric neither reassures the public nor permits it to make realistic comparisons to other threats, such as nuclear terrorism. Science’s frame of reference does not communicate well to the public.

Divergent timetables. The timetables of science (which operates in a framework of decades or longer) are completely out of synch with the timetables of public policy (which operates in a framework of months and years). It has taken nearly 30 years for the National Academy of Sciences to complete its study of the consequences of oil drilling in Alaska’s North Slope; in that time, a great deal of environmental damage has been done, and political pressure for further exploration in the Arctic National Wildlife Refuge has gained momentum. At this stage, the academy’s scientific report stands to become little more than a political football. Vaccine research is another example: political demands for prompt action on high-profile diseases do not jibe well with the painstaking process of research and trial. Political pressures push resources toward popular or expedient solutions, not necessarily those with the greatest chance for long-term success.

Two more manifestations of the divide are particularly troublesome:

The accelerating requirement that knowledge be “scientific.” In both the academic community and Congress, the assumption is growing that only knowledge verified by scientific means (such as random assignment experiments) can be considered “real knowledge.” Unfortunately, only a minuscule number of policy decisions can ever hope to be based on verified scientific knowledge. Most public policy decisions must rely on ways of knowing—including judgment, insight, experience, history, scholarship, and analogies—that do not meet the gold standard of scientific verification. Our society lacks a clear understanding of the strengths and limitations of nonscientific ways of knowing, how to discriminate among them, and how they are best used in conjunction with scientific knowledge. Since the time of the ancient Greeks, our civilization has presupposed a hierarchy of knowledge, but never before have forms of nonscientific knowledge been so problematic and devalued, even though they remain the mainstay of policy and of everyday life.

Colliding political and scientific realities. Although the scientific framework demands that scientists maintain objectivity and neutrality, political leaders pressure scientists to produce the “correct” answers from a political point of view. When political and scientific imperatives collide, science is usually the loser. President Reagan’s science advisors on antiballistic missile systems found themselves marginalized when they didn’t produce the answers the administration wanted. Scientists do not have a lot of experience in dealing with political pressures in a way that permits them to maintain both their integrity and their influence. Arguably, this has been the greatest single factor in science’s declining influence in policy decisions.

Nor are these the only symptoms. A host of other elements exacerbate the divide between the two worlds: unresolved collisions with religious beliefs, difficulty in assessing the relative importance of threats, the growing number and complexity of issues, and the wide array of cultural and political differences in society. The overall result is a dangerous exclusion of the scientific viewpoint from political and economic decisionmaking at the very time when that viewpoint is most urgently needed.

Bridging the divide

These are impressive hurdles, but they can be surmounted if the will to do so is strong enough and if we follow two core principles. The first is that the initiative to bridge the gap must come primarily from the scientists’ side rather than the policymakers’ side, for reasons both of motivation and of substance. Scientists have the stronger motivation to take the initiative, because they know how important their input can be, and they are aware of the dangers posed by their loss of influence. In addition, scientists have an advantage regarding the substance of policy when scientific matters are at issue, because their mastery of the material makes it easier for them to integrate scientific and nonscientific content than for policymakers to do so. After all, as concerned citizens, scientists live in the policy world along with everyone else. But most policymakers are far removed from the world of science.

The second core principle is that scientists’ efforts must not be confined to engaging policy elites but must extend to the general public as well. In our democracy, no major initiative can succeed without broad public support, which can be especially challenging to garner for proposals that require sacrifice or changes in lifestyle. Further, in many policy arenas the only way to offset special interest lobbying is to mobilize the public against it.

I see two strategies for bridging the divide: one for repairing lost influence at the top of the policy hierarchy and the other for engaging the public on important science-laden issues.

To regain influence at the top, reposition and reframe the science advisory function, shifting from the narrow role of science specialist to the broader role of framer of policy options. In recent years, scientists have increasingly been relegated to the “specialist” sidelines. There was a time, however, when scientists’ voices were heard and heeded at the highest levels of policymaking. When the Massachusetts Institute of Technology’s James Killian served as President Eisenhower’s science advisor in the 1950s, the two men forged a close and mutually respectful working relationship. This proved to be the pinnacle of science’s influence in U.S. policymaking circles.

In many ways, Killian’s advisory role to Ike resembled McGeorge Bundy’s advisory role as John F. Kennedy’s national security advisor and “options czar” more than it resembled that of any current presidential science advisors. Instead of offering a specialist’s perspective, Bundy exercised his influence by controlling the process of framing and presenting policy options for presidential action. Bundy was scrupulous about doing justice to policy options with which he disagreed, but he was also able to make the strongest possible case for his own point of view. Kennedy knew that Bundy was a strong-minded and determined player, not just a technical advisor; at the same time he also trusted Bundy to make a fair case for options other than his own.

In contrast, today’s scientists, operating mainly in specialist mode, do not address the key question troubling political leaders: for all policy options under consideration, what relative weight to give to the various specialists’ perspectives and how to balance them against political considerations. The more technical the scientific input, the less its relevance to policymakers’ most basic concerns.

The role of technical advisor also places scientists in a political bind by forcing them to resort to advocacy or frustrated silence when they strongly disagree with policy decisions, thus risking their professional integrity and their political credibility. (Issues readers will be able to supply all too many examples.)

To break out of the specialist box, scientists need techniques for framing policy options that give the proper weight to their scientific content in relation to nonscientific variables and political realities. Such techniques of policy option presentation not only frame the technical aspects of an issue but also develop a range of scenarios of likely consequences that take into account relevant nonscientific perspectives: social, economic, cultural, geopolitical, military, etc. By drawing on these various perspectives, fully acknowledging the merits of each and framing policy options accordingly, the policy option presentation technique gives scientists a way to upgrade their role while also performing the specialists’ function. Even more, it provides a politically acceptable vehicle for advocacy: those who control the option-framing process can make the strongest possible case for their own point of view, provided that they are willing and able to do full justice to points of view with which they may personally disagree.

To better engage the public, shift from the goal of “science literacy” to the goal of reaching sound “public judgment” on scientific issues, and use specialized forms of dialogue to advance this goal. Although framing options for top-level decisionmakers is a necessary condition for winning greater influence, it is not sufficient. Important policy changes also require broad-based public support. At present, though, the voters are largely disengaged, reluctantly abandoning decisions that affect their lives to experts they do not trust. Scientists hold the key to breaking the deadlock, at least on science-laden issues. But to do so, they need to rethink the goals and strategies of public engagement.

Part of the problem, as mentioned earlier, is that scientists persist in thinking that the goal of public engagement is to raise the level of scientific literacy. This assumption misses the point. Citizens do not need to be second-hand scientists. But they do need to be able to make sound judgments about science policy choices, ranging from global warming and genetically modified foods to nuclear proliferation and human cloning.

Bringing about sound public judgment requires two distinctly different steps. The first is to get the issue onto the forefront of the public agenda, endow it with urgency, and present a range of choices for dealing with it. The second, more difficult, step is to engage the public with sufficient intensity and focus to achieve resolution.

Our society has excellent mechanisms for the first step: placing issues before the public. The media, as well as political and civic leadership, are highly skilled at raising awareness of key issues, as can be seen in the increased public concern about global warming. Awareness by itself, however, is not enough. All too often, the media beat the drums for an issue, get people aroused, and then abandon it for the next issue, leaving the public hanging and the issue unresolved. Moving people beyond awareness to judgment and resolution is far more arduous. It requires considerable “working through” as the public seeks to reconcile possible courses of action with its own deeply held beliefs and habits.

Global warming, for example, is stalled at the threshold of this phase: awareness of the issue is growing, but thus far the public has resisted coming to terms with the tradeoffs involved in any serious solution. Should we permit an international agreement such as the Kyoto treaty to constrain our domestic policies? Is a push for alternative fuels worth the high cost of the investment? Should our control of carbon dioxide emissions be so stringent that it limits economic growth? The public must come to judgment on these and similar questions of values before any sustainable policy can be put into place.

Unfortunately, our society lacks effective institutions for taking this second step, especially on science-laden issues. The media are not equipped to do it, nor are most political leaders, who operate through advocacy rather than through encouraging the public to make up its own mind. Scientists, however, are potentially well equipped for this task. With a certain amount of instruction and experience, a small cadre of scientists could, if sufficiently determined to do so, establish a new, more robust model of public engagement.

This model would adapt for the general public the strategy of framing policy options described earlier. The scientists’ role would be twofold. First, they would formulate a range of policy options and scenarios for science-laden issues, paying special attention to the pros and cons of each and keeping in mind the public’s primary concern: How does this affect me, my community, and my world? Then scientists would collaborate with experts in public dialogue in presenting these scenarios to random samples of citizens. A number of organizations (my own company among them) have developed innovative methods for accelerating the working-through process with citizens. We utilize special forms of dialogue that encourage participants to engage issues with unprecedented depth and intensity. These special citizen dialogues predict the likely direction public opinion will take once the larger population has understood the tradeoffs associated with complex scientific issues. These citizen dialogues give leadership the insight into public priorities and values they need in order to engage the full electorate.

A new career path

Science is one of the very few fields where individuals make major contributions at a young age. Thus, many scientists find themselves at a crossroads relatively early in their careers. Most choose to continue as working scientists, but others are ready to consider attractive alternatives. I believe that the role of scientist as bridge builder and policy formulator offers an appealing alternative for those drawn to top-level decisionmaking and the give and take of public life. For those with a conceptual bent, this alternative would be far more attractive than administrative work.

Top-flight working scientists are ideal candidates for this kind of high-level policy involvement. Not only can they provide a depth of expertise that is sorely lacking in generalists’ discussions of scientific issues, they can also help their scientific colleagues understand the importance of nonscientific perspectives to their own work and the future of their field.

Many scientists will probably prefer to keep their focus on their scientific work, and others will find shifting back and forth between their own worldview and that of the larger society not worth the effort. But with even a small, committed cadre of high-level scientific thinkers, I believe that science can once again make itself heard about the issues that affect our collective future, both at the policymaker level and the level of public discourse. In doing so, they will be making an innovative contribution to our society. Indeed, if we are to avoid disaster, we have no choice.

Archives – Spring 2003

Science Advisory Board 1933

In July 1933 President Franklin D. Roosevelt issued an executive order to establish the National Research Council’s (NRC’s) Science Advisory Board (SAB), a body set up to address scientific problems of the various government departments. The SAB’s first task was to survey the overall relationship between science and the government. After this and other initial successes, Roosevelt issued another executive order in 1934 to broaden the SAB’s membership.

By 1935, problems with the SAB’s makeup became apparent. Because the board was, in effect, appointed by the president, concerns arose that it was too vulnerable to political control. In addition, the SAB’s anomalous position as a government-appointed group within the NRC created jurisdictional problems with the National Academy of Sciences (NAS). The SAB was allowed to expire at the end of its initial charter in late 1935, and its functions were assumed by a more broadly representative NAS Committee on Government Relations. This committee had limited influence and was eventually dissolved in October 1939. Although the SAB was relatively short-lived, it did set the pattern for subsequent large-scale NAS-NRC efforts in providing policy advice.

Our photograph shows the first meeting of the SAB, held in late August 1933. Seated from left to right are Isaiah Bowman, SAB Chairman Karl T. Compton, W.W. Campbell, and John C. Merriam. Standing are Robert A. Millikan, C.K. Leith, and future NAS President Frank B. Jewett.

Cybersecurity: Who’s Watching the Store?

With information technology (IT) permeating every niche of the economy and society, the public has become familiar with the dark side of the information revolution–information warfare, cybercrime, and other potential ways nefarious parties might try to do harm by attacking computers, communications systems, or electronic databases. The threats people fear range from nuisance pranksters abusing the World Wide Web, to theft or fraud, to a cataclysmic meltdown of the information infrastructure and everything that depends on it. As IT becomes more tightly woven into all aspects of everyday life, the public is developing an understanding that disruption of this electronic infrastructure could have dire–conceivably even catastrophic–consequences. During the past decade, government officials, technology specialists, policy analysts, industry leaders, and the general public have all become more concerned about “cybersecurity”–the challenge of protecting information systems. Prodigious efforts have been expended during this time to make information systems more secure, but a close examination of what has been achieved reveals that we still have work to do.

The threat to information systems potentially takes many forms. Experts generally identify four different “attack modes”: denial, deception, destruction, and exploitation. Or, to put it another way, someone can break into an information system to stop it from operating, insert bogus data or malicious code to generate faulty results, physically or electronically destroy the system, or tap into the system to steal data. Experts also agree that such threats can come from a variety of sources: foreign governments, criminals, terrorists, rival businesses, or simply individual pranksters and vandals.

Many people associate cybersecurity with the Internet revolution of the 1990s. In fact, the idea of information warfare directed at computer networks dates back to 1976, to a paper written in the depth of the cold war by Thomas Rona, a staff scientist at the Boeing Company. Rona’s work was an outgrowth of electronic warfare in World War II and the introduction of practical computers and networks. He speculated that in the emerging computer age, the most effective means to attack an adversary would be to focus on its information systems.

Rona’s research came at a propitious moment, because the Department of Defense was itself just beginning to consider whether such tactics might be a silver bullet for defeating the Soviet Union. This interest had been triggered, ironically enough, by Soviet military writings. The Soviets believed the United States was preparing for radioelektronaya bor’ba–“radio electronic combat.” As it turned out, U.S. capabilities were not nearly as far along as the Soviet writers feared. But once U.S. officials discovered that Soviet officials were concerned about computer attacks, they began to look into the possibilities more closely.

The payoff occurred in the 1991 Gulf War, the first conflict in which U.S. commanders systematically targeted an adversary’s command and control systems. These efforts were an important reason for the U.S.-led coalition’s lopsided victory. After the war, when U.S. officials realized how important this “information edge” had been, they started to worry more about the vulnerability of the United States’ own electronic networks.

Throughout the early 1990s the Defense Department examined this threat more closely. The closer officials looked, the more worried they became. They were especially concerned about the vulnerability of U.S. commercial systems, which carry the vast majority of military communications. One of the first unclassified studies was the Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D), which the Defense Department released in November 1996. This report was followed by other studies that reached similar conclusions about the cybersecurity threat.

Largely as a result of these studies, in May 1998 the Clinton administration issued Presidential Decision Directive 63, which directed federal agencies to take steps that would make their computers and communications networks (in addition to other critical infrastructure) less vulnerable to attack. It also led to the establishment of several measures intended to address the threat to the commercial sector. These included:

  • appointment of a national coordinator for security, infrastructure protection and counter-terrorism on the National Security Council staff with responsibility for overseeing the development of cybersecurity policy;
  • establishment of the National Infrastructure Protection Center in the Federal Bureau of Investigation (FBI), which was responsible for coordinating reports of computer crime and attacks so that the federal government could respond effectively; and
  • establishment of the Critical Infrastructure Assurance Office (CIAO) to coordinate the government’s efforts to protect its own vital infrastructure, integrate federal efforts with those of local government, and promote the public’s understanding of threats.

Computer security received even more attention in the late 1990s because of the “Y2K problem”–the possibility that at least some computers would fail when their software and internal clocks mistook the year 2000 for 1900. A series of high-profile viruses such as Melissa and Love Bug and computer hacking cases such as that of Kevin Mitnick also caught the attention of the press. Combined with the fact that millions of Americans were becoming personally familiar with computers–and with everything that could go wrong with them–these events transformed cybersecurity from an esoteric topic familiar only to computer specialists and military thinkers into a public policy issue of widespread concern.

The Clinton administration appointed Richard Clarke, a National Security Council staff member, to the post of national coordinator for security, infrastructure protection and counterterrorism. Clarke directed the development of the National Plan for Information Systems Protection Version 1.0: An Invitation to a Dialogue, a 199-page document released in January 2000. According to the CIAO, the plan “addressed a complex interagency process for approaching critical infrastructure and cyber-related issues in the federal government.”

The new Bush administration agreed with the Clinton administration about the importance of cybersecurity and began a review of cybersecurity policy when it entered office in January 2001. In October 2001, the administration issued Executive Order 13231, which established a new effort for protecting information systems related to “critical infrastructure,” including communications for emergency response.

The Bush administration also began an effort to develop a new cybersecurity strategy. It retained Clarke as special adviser for cyberspace security within the National Security Council. A draft of the new strategy was released to the public in September 2002. The administration held a series of public hearings among representatives from government, industry, and public interest groups during the next several months and released The National Strategy to Secure Cyberspace in February 2003.

How effective?

The new cybersecurity strategy has five components:

  • a cyberspace security response system–a network through which private sector and government organizations can pool information about vulnerabilities, threats, and attacks in order to facilitate timely joint action;
  • a cyberspace security threat and vulnerability reduction program, consisting of various initiatives to identify people and organizations that might attack U.S. information systems and to take appropriate action in response;
  • a cyberspace security awareness and training program, consisting of several initiatives to make the public more vigilant against cyberthreats and to train personnel skilled in taking preventive measures;
  • an initiative to secure governments’ cyberspace, consisting of steps that federal and state agencies will take to protect their own information systems; and
  • national security and international cyberspace security cooperation–initiatives to ensure that federal government agencies work effectively together and that the U.S. government works effectively with foreign governments.

Although many of these individual initiatives are probably valuable, the approach of the current plan, like that of its 2000 predecessor, lacks at least three features taken for granted in most other areas of public policy. This may be the most fundamental shortcoming of U.S. policy for cybersecurity up to now.

First, the assessment of the threat, and thus the strategy’s estimates of the potential costs of inaction, is largely anecdotal. The strategy also lacks a systematic analysis of alternative courses of action. As a result, the new strategy cannot provide a clear comparison of the costs and possible benefits of the various policies it proposes.

Second, the strategy lacks a clear link between objectives and incentives. Economic theory holds no opinion on whether people are inherently well-meaning or evil; it simply assumes that people respond to incentives. That is why a clear, rational incentive structure is the cornerstone of any effective public policy. Unfortunately, the cybersecurity strategy offers no such incentives. It also lacks a closely related component: accountability. Nothing in the policy holds public officials, business executives, or managers responsible for their performance in ensuring cybersecurity.

Third, the strategy rejects in toto the use of regulation, government standards, and liability law to improve cybersecurity. These are basic building blocks of most public policies designed to shape behavior, so one must wonder why they are avoided in this case.

The rationale for the current strategy is that it avoids regulation and government-imposed standards to ensure that U.S. companies can continue to innovate, remain productive, and compete in world markets. This statement, however, overlooks another basic fact about public policy: Such policies always must reconcile individual profitability and economic efficiency with security, which has some of the characteristics of a public good. It is precisely because there are competing interests that policymakers must strike the right balance–not reject such measures completely, as the current strategy does.

Ideally, the new cybersecurity strategy would have established an analytical framework that explained how it selected some options and rejected others. Instead, the current strategy merely gives a laundry list of activities that may be excellent ideas or a total waste of effort but that bear no relationship to the severity of the threat and provide no link between proposals and priorities.

What is the need?

The most important question to ask in addressing any public policy issue is: What problem needs to be solved? Yet despite all the attention that cyberattacks receive in the media, there is little hard data for estimating the size of the cybersecurity threat or for calculating how much money is already being spent to counter it.

The data gap begins with the government. According to the General Accounting Office, the federal government spent $938 million on IT security in 2000, just over $1 billion in 2001, and $2.71 billion in 2002. However, the data do not tell us how much is being spent on different kinds of security measures. Moreover, there is no way to determine from the data whether all government agencies keep track of IT security spending in the same way.

Publicly available data on private IT security spending is, if anything, even less reliable and harder to come by. According to the Gartner Group, a leading IT consulting firm, worldwide spending on security software alone totaled $2.5 billion in 1999, $3.3 billion in 2000, and $3.6 billion in 2001. Once spending on personnel, training, and other aspects of information security is considered, total IT security spending could be substantially more. But the bottom line is that neither government nor private sector statistics on IT security spending are terribly useful for the kind of analysis that is common in most other policy sectors.

The most often cited source for IT security data is probably the FBI-sponsored survey published by the Computer Security Institute (CSI), a San Francisco-based membership organization for information security professionals. In 2002, CSI sent its survey to 503 computer security practitioners in U.S. corporations, government agencies, and financial institutions. The survey asked respondents about the security technology they use, the types and frequency of attacks they had experienced, and the losses associated with these attacks.

Needless to say, this is a very small percentage of all computer networks and hardly a scientific sample. Yet the greatest shortcoming of the CSI survey is that it lacked reliable procedures for uniformity and quality control. Each respondent decided for itself how to estimate its losses. One company might estimate financial damages from cybercrime with data from its accounting department, using insurance claims and actual write-offs on its balance sheets. Another might provide a gut estimate from a systems operator who monitors network intrusions. In either case, CSI did not require substantiation.

Moreover, fewer than half the respondents in the 2002 CSI survey (44 percent) were willing or able to quantify financial losses due to attacks, which means the data that were provided are almost certainly statistically biased. The results should raise questions even at face value. For example, survey responses from 1997 to 2002 indicate that the number of attacks in some categories has been constant or falling, even though the number of potential targets grew exponentially during this time. Similarly, the total reported cost of these attacks soared, despite the fact that companies were more aware of the cyberthreat and were spending more to protect themselves.

In recent years CSI has conceded weaknesses in its approach and has suggested that its survey may be more illustrative than systematic. Nevertheless, government officials and media experts alike freely cite these and other statistics on the supposed costs of cybercrime, even when the estimates fail the test of basic plausibility. For example, in May 2000 Jeri Clausing of the New York Times reported that the Love Letter virus caused $15 billion in damage. Yet the most costly natural disaster in U.S. history–Hurricane Andrew, which in 1992 swept across Florida and the Gulf Coast–caused $19 billion in damages. Moreover, that figure reflected 750,000 documented insurance claims, along with tangible evidence: 26 lost lives and the near-total destruction of Homestead Air Force Base and an F-16 fighter aircraft. Are we to believe that one virus had almost the same destructive power as Hurricane Andrew? Similarly, during a February 2002 committee hearing, Sen. Charles Schumer (D-N.Y.) cited a report claiming that the four most recent viruses caused $12 billion in damage. By comparison, the Boeing 757 that crashed into the Pentagon on September 11 caused $800 million in damage. Could the four viruses cited by Schumer really have caused 15 times as much damage?
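
These plausibility checks are simple arithmetic. A few lines of Python make them explicit; the dollar figures are simply those cited above, not independent estimates.

    love_letter_claim = 15e9        # reported estimate for the Love Letter virus
    hurricane_andrew_damage = 19e9  # documented damage from Hurricane Andrew
    four_virus_claim = 12e9         # damage attributed to the four viruses cited by Schumer
    pentagon_757_damage = 0.8e9     # damage from the Boeing 757 that struck the Pentagon

    print(love_letter_claim / hurricane_andrew_damage)  # roughly 0.8: one virus, ~80% of Andrew
    print(four_virus_claim / pentagon_757_damage)       # 15.0: four viruses, 15 Pentagon strikes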

In reality, analyzing the damage of most network intrusions is time-consuming and expensive, which is why it has rarely been done on a large scale. To analyze an attack on a computer network, someone must review logs and recreate the event. Even then, sophisticated attackers are likely to stretch their attacks over time, use multiple cutouts so a series of probes cannot be traced to a single attacker, or leave agents that can reside in a system for an extended time–all making analysis harder. Logically, the trivial attackers are the ones most likely to be detected and the sophisticated ones are most likely to go unrecorded.

Without an exhaustive research program, which has not been carried out, the exact scope and nature of the cyberthreat remain enormously uncertain. The current strategy proposes to identify threats, but it does not propose to collect the reliable data that would define the threat. This lack of data is not an argument for ignoring cyberthreats. But when the available data contain this much uncertainty, dealing with that uncertainty must be an integral part of the strategy. A prudent policy will focus on the threats with high probability and high potential costs. It will hedge against the less certain, less dire threats, and include mechanisms that direct effort toward the areas with the greatest payoff and limit the resources that might inadvertently be spent on wild goose chases.

In our view, the greatest threat may simply be the economic harm that would result if the public loses confidence in the security of information technology in general. Only slightly less pressing is the possibility that a foreign military power or terrorist group might exploit the vulnerability of an information system to facilitate a conventional attack. A purely electronic attack that causes a widespread collapse of information systems for a prolonged period, with large costs and mayhem, is possible but remains a second-order concern, if only because potential attackers have alternatives that are easier to use, cheaper, and more likely to be effective in wreaking havoc.

The government role

The most significant feature of the role set forth for government in the current cybersecurity strategy is that it arbitrarily precludes action common in other regulatory domains. It also defines a role that is dubious at best. For example, the strategy states, “In general, the private sector is best equipped and structured to respond to an evolving cyberthreat.” This is also true in other regulatory domains such as occupational safety. No government body is responsible for issuing grounding straps and face goggles. Unfortunately, the strategy ignores a basic fact of regulation: Although implementation is left to the private sector, the government has a large role in setting standards, designing regulations, and enforcing these measures.

The strategy goes on to say that the federal government should concentrate on “ensuring the safety of its own cyberinfrastructure and those assets required for supporting its essential missions and services.” It also says that the federal government should focus on “cases where high transaction costs or legal barriers lead to significant coordination problems; cases in which governments operate in the absence of private sector forces; resolution of incentive problems…and raising awareness.” Alas, the government itself has a dubious record. As recently as February 2002 the Office of Management and Budget identified six common government-wide security gaps. These weaknesses included lack of senior management attention, lack of performance measurement, poor security education and awareness, failure to fully fund and integrate security into capital planning, failure to ensure that contractor services are secure, and failure to detect and share information on vulnerabilities.

In other words, more than six years after the Defense Science Board’s IW-D study and three years after the government’s first cybersecurity plan, most government agencies have yet to take effective action. This is hardly an argument for making government the trailblazer in security.

The reality of the situation is that the government is poorly suited for providing a model for the private sector. Government bureaucracies (not necessarily through any fault of their own) have too much inertia to act decisively and quickly, which is what acting as a model requires. Because of civil service tenure, government agencies lack an important engine of change found in the private sector, namely the ability to replace people inclined to act one way with people who are inclined to act another. Also, government agencies are locked into a budget cycle. In most cases, a year is required for an agency to formulate its plan, another year is needed for Congress to pass an appropriation, and a third year is required for an agency to implement the plan–at a minimum. This is why government agencies today are rarely at the leading edge of information technology. There is no reason to believe cybersecurity will be an exception to the rule.

The bulk of the responsibility for “securing the nets” will inevitably fall to the private sector because it designs, builds, and operates most of the hardware and software that form the nation’s information infrastructure. This is why the strategy’s determined avoidance of regulation and incentives is so misguided.

Organizations such as the information sharing and analysis centers that the government has encouraged industries to establish are valuable for coordinating action against common threats, such as viruses and software holes. Larger response centers such as the CERT Coordination Center at Carnegie Mellon University can play a similar role for the information infrastructure as a whole. However, the ownership and operation of the information infrastructure are simply too diffuse for real-time hacking and more serious cyberthreats to be dealt with through any kind of centralized organization. Cybersecurity is a problem requiring the active participation of scores of companies, hundreds of service providers, thousands of operating technicians, and millions of individual users.

The most effective way to shape the behavior of this many people is to set broad ground rules and make sure people play by them. Anything else amounts to trying to micromanage a significant portion of the national economy through central control. Two central questions must be addressed. First, what kinds of incentives will be effective at producing additional security? Second, how can we begin to design systems that provide an efficient level of security–that is, a level at which the difference between benefits and costs is maximized?
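
The notion of an efficient level of security can be made concrete with a toy calculation. In the sketch below, the benefit and cost curves are invented purely for illustration; the point is only that the efficient level is the one that maximizes benefits minus costs, not the one that maximizes security.

    # Toy example: choose the security effort level s (between 0 and 1) that
    # maximizes net benefit. Both curves are hypothetical.
    def expected_benefit(s):
        return 100 * (1 - (1 - s) ** 2)   # diminishing returns to added security

    def cost(s):
        return 60 * s ** 2                # rising marginal cost of added security

    levels = [i / 100 for i in range(101)]
    efficient = max(levels, key=lambda s: expected_benefit(s) - cost(s))
    print(efficient)  # about 0.62: well short of maximum security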

Policy options

A number of options should be on the table for designing more effective cybersecurity.

Better use of standards by the government and the private sector. The government might consider developing more secure standards and software protocols for the future Internet. These could include, for example, software that limits anonymity or requires “trust relationships” among multiple components of a network. (A trust relationship is one in which a user must identify herself and demonstrate compliance with technical standards before, say, she can gain entry to a database or use software.)
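
A minimal sketch of such a trust-relationship check follows. The user names, fields, and thresholds are hypothetical; the point is simply that access requires both identification and demonstrated compliance with a technical standard.

    # Hypothetical registry of identified users and their compliance status.
    REGISTERED_USERS = {"alice": {"patch_level": 12, "endpoint_scanned": True}}

    def grant_access(user_id, min_patch_level=10):
        profile = REGISTERED_USERS.get(user_id)
        if profile is None:                           # anonymous users are refused
            return False
        if profile["patch_level"] < min_patch_level:  # fails the technical standard
            return False
        return profile["endpoint_scanned"]            # compliance must be demonstrated

    print(grant_access("alice"))    # True
    print(grant_access("mallory"))  # False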

At a minimum, the government should consider playing a more active role than it does now in setting standards. Currently government policy is biased against intervening in the standard-setting process. Yet that is exactly what it should be doing when market forces left to themselves do not provide sufficient security to the country as a whole, which many experts believe is the case at present. Despite the claims of critics who want to “keep the information frontier free,” the government, in fact, was a main contributor to the development of the current Internet, including the processes that resulted in current standards.

In addition to more secure software standards, the IT industry should consider developing more rigorous security standards for operations and software development. These should address both outside threats, such as hackers, and insider threats, such as sabotage and vandalism within a company. After the rise in virus attacks and hacking incidents in 2000-01, some companies (most notably Microsoft) announced that they would make security a higher priority in the design of their products. Critics have complained that these efforts were inadequate and often disguised marketing strategies designed to impede competition. Whatever the merits of these criticisms, they illustrate how government could serve as an honest broker–if it takes a more active role. Such standards could be voluntary or enforced through regulation. The more important point is to ensure that someone establishes “best practices” for industry and government that are flexible enough for a variety of users but still provide a legal hook for liability.

Better use of regulation. In some cases the government may want to issue regulations that establish minimum acceptable security standards for operators and products. These would be cases in which the market has clearly failed: inadequate incentives or other factors prevent the private sector from developing such standards on its own. The government may also want to require firms to certify in their annual reports that they have complied with industry best practices.

Liability. Computer and software makers have generally fought changes in liability law. A key argument is that an increase in liability could reduce innovation in the fast-moving IT sector: true enough, but only if the changes are poorly crafted or go too far. There is a good economic rationale for changes in liability law that would give software and hardware companies some responsibility, so that they have an incentive to pay more attention to security.

Liability represents a big step beyond many of the voluntary measures now being advocated, which we doubt will be adequate to address the problem in most cases. The strength of liability is that it is a market mechanism, far more efficient than centralized control at shaping the behavior of millions of firms and users.

Reforming IT liability is, in effect, a market-style measure to promote better security by providing those best positioned to take action with the incentive to do so. In this same vein, the government should consider measures that would require corporations to tell their stockholders whether there are significant cybersecurity risks in their business and to certify that they are complying with industry standards and best practices to address them.

Clearly, one size does not fit all when it comes to cybersecurity. The kinds of measures appropriate for a Fortune 500 corporation are probably inappropriate for a start-up company operating out of a garage. The best approach is probably to let the market, combined with reasonably defined legal responsibilities, tailor an optimal solution. But for this to happen, government will need to remove the obstacles that currently prevent the market from doing so, and to play a role in those cases in which a “public goods” problem dissuades companies and consumers from acting.

Research. Most important, we need to recognize that we are largely flying blind at this point in a public policy sense, because we have such a limited understanding of the costs of cybersecurity attacks and the benefits of preventive measures. The government should sponsor research on this subject–research that, up to this point anyway, the private sector has been unwilling or unable to conduct. It should also develop mechanisms for systematically collecting information from firms (with appropriate privacy protections) that would allow the government to help develop a better strategy for addressing cybersecurity in the future.

Security and privacy. Finally, public officials must learn how to balance privacy and security, and public policy analysts must do a better job of explaining the balance between these two goals. Simply put, technology often leaves no practical means to reconcile privacy and security.

For example, a trusted IT architecture, in which only identified or identifiable users can gain access to parts of a computer or network, inherently comes at the expense of privacy. A user must provide a unique identifier to gain access in such a system, and this naturally compromises privacy. Even worse, the data that a network uses to recognize a trusted user often can be used to identify and track the user in many other situations.

On the other hand, technology that guarantees privacy usually presents some insurmountable problems for security. The classic example is strong encryption. Because it is impossible for all practical purposes to break strong encryption, a person using it can conceal his communications, thus ensuring privacy. But such protection also can make it impossible to trace criminals, terrorists, hostile military forces, or others who would attack computer networks.
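
The point can be illustrated in a few lines using the third-party Python package cryptography, which is merely a convenient modern example and is not tied to the systems discussed here: only the holder of the key can read the message.

    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"meet at the usual place")

    print(Fernet(key).decrypt(ciphertext))   # the key holder recovers the plaintext
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)   # any other key fails
    except InvalidToken:
        print("unreadable without the key")

Roughly speaking, the key escrow proposal discussed next would deposit a copy of such a key (or an equivalent back door) with a third party who could be compelled to produce it.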

One way of addressing this problem is to concede the technical threat to privacy and use strict laws and regulation to compensate. This, of course, was the idea behind key escrow, which government authorities proposed in the mid-1990s as an alternative to completely eliminating restrictions on encryption. Under the proposed system, third parties would hold the “keys” to a cipher (actually, the means to break the cipher via a back door). Under certain specified conditions, the third parties could be ordered to provide the keys.

The U.S. government (in particular, law enforcement and intelligence organizations) took an imperious approach to the issue, which proved foolhardy because, in fact, they could not control the spread of encryption even if they tried. At the same time, the IT industry adamantly resisted the proposal, arguing that foreign customers would not buy “crippled” U.S. software or hardware, and thus opposed any restrictions. In the end, the technology did prove beyond control, and the net result was soured relations between government and industry, which continue even today.

Rather than focus on whether or not to control a particular technology, society would often be better off addressing the consequences of abuse of the technology. There are numerous precedents for such an approach. For example, technology allows the local Blockbuster to track its customers’ rental records, but laws ensure that misuse of those records carries substantial penalties. Similarly, trusted systems could be required in specific applications (e.g., financial institutions, critical infrastructure), and people could then choose whether they wished to use those networks. Other systems (ordinary e-mail, for instance) could remain unregulated. Laws could ensure that privacy was protected–and that users who tried to enter a system without complying with disclosure requirements were criminally liable. These regulations should be enforced in a way that engenders public support. One approach is to have a nonpolitical, bipartisan governing body that makes sure government enforces these standards and does not abuse its own access to personal data.

A leadership role

Designing a cybersecurity policy is not simple. There is very little good information on the costs of cybersecurity attacks or on the benefits of proposed policy measures. The problem is complicated by the diffuse nature of the IT infrastructure, the large number of users, and the diverse nature of potential attackers.

Addressing this problem will take economic insight and political courage. Given the complexity of the problem, we think a variety of policy instruments should be used, including voluntary standards, regulation, and liability. The challenge for policy research is to develop deeper insights about the precise nature of the cybersecurity problem and the costs and possible benefits of different policy interventions.

The challenge for politicians is to give more than lip service to this issue. That means taking a leadership role in communicating the importance of the problem and defining a mix of government and private-sector strategies for dealing with it in a manner comparable to that found routinely in other areas of public safety and homeland security.

At a minimum, funding of serious, comprehensive research on the size of the problem and the benefits and costs of policy measures should be relatively noncontroversial and beneficial. Somewhat harder is educating the public about difficult trade-offs that need to be faced. At some point, we are likely to find, for example, that security cannot be enhanced without making some sacrifice in other features, such as ease of use or total assurance of privacy and anonymity. Such tradeoffs should not be swept under the rug but rather discussed as part of a continuing dialogue over the best way to approach this difficult problem.

Computer technology

The Defense Advanced Research Projects Agency’s (DARPA’s) Strategic Computing program was a 10-year billion-dollar initiative to advance “machine intelligence” by pushing forward in coordinated fashion technologies for powerful computer systems that could support human intelligence or, in some cases, act autonomously. DARPA, part of the Department of Defense (DOD), supported R&D on computer architectures and gallium arsenide integrated circuits, as well as on systems intended to appeal to the military services–a “pilot’s associate” for the Air Force, battle management software for the Navy, and robotic vehicles for the Army. Many different firms and university groups participated.

Alex Roland, a Duke University history professor, and Philip Shiman provide a detailed narrative of events inside the Strategic Computing program, drawing from interviews and archival sources on its origins, management vagaries, and contracting. As signaled by the titles of the first three chapters, which are named for DARPA managers, the authors focus on individuals more than on institutions or policies. Technologies themselves are not treated in much depth. Readers with some knowledge of computer science or artificial intelligence will find signposts adequate to situate the Strategic Computing program with respect to the technological uncertainties of the time, such as the need for massive amounts of “common sense” knowledge to support expert systems software. Other readers may wish for more background.

There are a few errors. We are told, disconcertingly, that President Ronald Reagan “practically doubled defense spending in his first year.” In fact, defense outlays rose by 18 percent from 1981 to 1982 (9 percent in inflation-adjusted dollars) and by smaller increments in subsequent years. More significant for the Strategic Computing program itself, the authors appear unaware that the Office of the Secretary of Defense had awarded a near-monopoly on R&D on silicon integrated circuits (the “target” that gallium arsenide circuits would have to surpass) to a different DOD initiative, the Very High Speed Integrated Circuit program. Most observers believed it was this bureaucratic directive, not gallium arsenide’s intrinsically greater resistance to the ionizing radiation created by nuclear blasts, that led DARPA to steer its program funding toward gallium arsenide chips.

These and other small blemishes detract only a little from a book that seems exemplary for what it is: a product of the history of technology, a field in which Roland (also a military historian) is widely known. Like much else from historians, it will probably not satisfy readers interested in government policy. For example, the authors do not try to give a sense of funding levels for similar work within DARPA, much less work sponsored by other agencies, either before or after the injection of “new money” appropriated by Congress–perhaps the first question readers concerned with policy would ask.

More generally, the authors’ analytical apparatus seems attuned to academic debates over technological determinism and “the complex question of how and why great technological change takes place.” The idea seems to be that readers will, at the book’s end, be able to reach their own interpretative conclusions. Those lacking the shared assumptions of historians of technology may be unequal to the task. And, after more than 300 pages, many of them concerned with the twists and turns of DARPA management, I was left unsure what to make of the program itself. The authors, for their part, do not give a concise, plainly stated verdict, perhaps because historians tend to view such attempts as reductive oversimplifications. Readers who know little about DARPA will pick up a good deal of incidental knowledge. Otherwise, I suspect the book will be of greatest interest to those who wish to know more about this particular program, along with scholars who share the authors’ academic concerns.

Strategic Computing contrasts sharply with another of Roland’s books, Model Research, published in 1985, which provided a history of the National Advisory Committee for Aeronautics (NACA). In the earlier book, the accretion of historical detail builds to a clear picture of an agency that self-destructed through excessive conservatism. Roland showed why, long before the near-panic that followed the 1957 Soviet Sputnik launches, NACA had lost the trust of high-level federal officials. In the aftermath of Sputnik, few policymakers thought NACA a viable candidate for the task of bringing order to the many competing U.S. missile and space programs. Instead, the government created two new agencies: DARPA, administratively established by Defense Secretary Neil H. McElroy over the opposition of the military services (which did not want an R&D organization outside their control), and the legislatively established National Aeronautics and Space Administration, which absorbed NACA. After DARPA, unable to hold on to its coordinating role within DOD, was forced to invent a new mission, the agency became the home for visionary, long-term military R&D and prototyping.

By the 1980s, DARPA was widely viewed as one of the government’s most effective R&D organizations, credited with substantial contributions to military technologies, including stealth, and dual-use technologies especially in computing. Yet when Roland and Shiman take us inside the agency’s celebrated Information Processing Techniques Office–known for sponsorship of the ARPANet, mother of the Internet–the reality is unsettling. The Strategic Computing program’s goals changed again and again: “At least eight different attempts were made to impose a management scheme,” the authors say. What began as a broad effort to push forward technologies related to artificial intelligence ended, after about 1990, submerged within the multiagency High Performance Computing and Communications initiative. This effort was structured around development of supercomputers and networks for number-crunching–a far stretch from advancing machine intelligence.

What measure success?

Despite the waste motion associated with the Strategic Computing program’s changes in direction and the unanswerable question of what its outcomes might have been if DARPA had stuck closer to the original structure and objectives, it would be misleading to label the undertaking a failure. There are few metrics for judging the effectiveness of technology programs with such heterogeneous agendas. The relationships between the program’s technical goals, intended to be mutually supporting, were complicated and necessarily shifted over time as earlier uncertainties were resolved and new problems appeared. DARPA’s task was doubly complicated by the need to satisfy the military services, which have limited tolerance for R&D unless closely coupled with foreseeable applications to major weapons systems.

The Strategic Computing program included some research and some development. For research in disciplines that are reasonably well characterized and have frontiers marked by professional consensus, social processes provide the equivalent of evolutionary selection in nature: Scientists vote with their citations and their choices of further research directions. In design and development, where the objective is a concretely conceived good or service delivered to some final customer, selection takes place through market mechanisms. In the case of military systems for which no market exists, experience in the field and the integration of technical systems with military doctrine and operational planning accomplish the eventual winnowing. (A case in point: After the Air Force declined to fly its B-1B bombers during the 1991 Gulf War, few observers could believe that this particular fruit of the Reagan defense buildup had been worth the $30-plus billion expended.) The Strategic Computing program was all of these things, yet none of them to the extent needed for straightforward evaluation.

It is too early to look for many results in DOD’s procurement pipeline, given that it takes 15 or 20 years for new systems to reach the field while the Strategic Computing program ended only a decade ago. So far, civilian spin-offs appear to be few. Of what the authors call the program’s quintessential artifact–the massively parallel computers built by Thinking Machines Corporation before its bankruptcy–fully 90 percent went to federal agencies or DOD contractors, rather than commercial customers. Nonetheless, spin-offs may lie ahead. Long lead times continue to characterize the maturing of complex technologies, notwithstanding short technology cycles in microelectronics. After all, the ARPANet and its successors grew slowly for three decades, virtually unknown outside the user community, until the Internet burst into public consciousness in the mid-1990s. That artificial intelligence has been disappointing enthusiasts for four decades does not necessarily mean that its future holds nothing but further frustration.

Since World War II, the U.S. government has supported computing and related technologies through the policies and programs of many different agencies and subagencies both inside and outside of DOD. These policies and programs go far beyond R&D. Defense spending fostered the early creation of a flourishing research-oriented intellectual community centered on electrical engineering and what is now known as computer science, along with occupational communities of programmers and practice-oriented computer engineers and systems analysts. Regulatory interventions through antitrust spurred the growth of independent software firms from the late 1960s. From the beginning, government agencies have been major purchasers of both hardware and software; although no accounting is available, DOD may now be spending $50 billion or more per year on software alone, including maintenance and upgrades.

As hard as they are to weigh, the impacts (positive and negative) and interactions of these and other policies, which are often uncoordinated and sometimes contradictory, have been a powerful force for innovation in computing and information technologies over more than 50 years. The Strategic Computing program, with its multiple shifts in direction and internal conflicts, resembles the larger U.S. policy structure in microcosm. To some observers, federal policies and programs in their profusion and confusion may seem not only unsystematic but wasteful. Yet as demonstrated most recently by the Internet, the government’s actions remain a major spur, as they have been since the 1940s, to the sprawling family of technologies so often acclaimed as the source of a third industrial revolution (or a first postindustrial revolution). Fruits of the program may show up in unexpected places in the years ahead. Strategic Computing will provide a starting point for identifying them.

Reducing toxic risks

In Deceit and Denial, Gerald Markowitz and David Rosner provide a carefully documented history of the rapid growth in the use of toxic substances by U.S. industry, juxtaposed against the tragic story of the much slower growth of scientific knowledge and public information about their risks. The authors focus on two illustrative cases–the use of lead in a variety of consumer products and the use of vinyl chloride in plastics–and they raise important policy questions. When such substances are introduced, how can public authorities foster expeditious and objective research on health effects? If health effects remain uncertain, when do scientists and manufacturers have an obligation to alert the public to possibilities of risk? More broadly, should government adopt a precautionary principle that forbids the introduction of new substances until they are proven safe?

Unfortunately, this is also a book dominated by villains and heroes. The lead and plastics industries stand accused of systematically hiding information about the toxic characteristics of their products for decades, manipulating science, engaging in false advertising, and generally misleading the public. The authors contend that such deception is typical of U.S. industry: “Lying and obfuscation were rampant in the tobacco, automobile, asbestos, and nuclear power industries as well,” they write. Government regulators, public health authorities, environmental groups, and journalists are the heroes who brought knowledge to the public and ultimately restrained industrial use of the toxic substances. But beyond pointing to this perceived interplay of forces, the authors do not attempt to answer most of the questions they raise. They simply call for better public access to industry research about toxic chemicals and raise the idea of prohibiting the marketing of new chemicals until they are proven safe.

In developing their cases, the authors–who are forthright in acknowledging that they have been employed by plaintiffs’ lawyers as expert witnesses and have gained access through those relationships to many of the documents used in the book–paint with a remarkably broad brush. They maintain, for example, that “Industry was well aware of the dangers of lead throughout the nineteenth century,” adding that trade associations have damaged the nation’s democratic institutions and that industry’s calls for more scientific evidence have often served as a stalling tactic.

The simplistic character of this diatribe does not do justice to the importance and complexity of the policy issues highlighted by these valuable histories. One is struck by how early scientists suspected and publicized possible links between exposure to lead and workers’ health problems, as well as by how long it took for consensus to emerge about the character and seriousness of those risks. In the early years of the 20th century, lead had gained wide use in the pipes, canned goods, dishes, and other conveniences eagerly sought by increasingly urban and prosperous U.S. citizens. By 1908, Alice Hamilton, a physician associated with Hull House in Chicago, was reporting on studies that linked exposure to lead with miscarriages among female factory workers. Government action followed relatively quickly. A decade later, congressional committees, public health authorities, organizations representing workers, and companies themselves took steps to protect workers from exposure to high concentrations of lead.

But understanding the long-term effects of low-level exposure to lead and informing the public about this risk took much longer. Just as workers began to receive protection, automobile companies added tetraethyl lead (ethyl) to gasoline in order to eliminate engine knock. A blue-ribbon panel in the mid-1920s recommended regulation of leaded gas but concluded that evidence was insufficient to warrant banning its sale. Similarly, scientists began to link lead-based paint to lead poisoning in the 1920s and 1930s, prompting some companies and consumer groups to discourage its use on children’s furniture and toys. It was not until the 1960s, however, that scientists agreed on the long-term neurological effects of inhaling or ingesting relatively small quantities of lead from paint and adopted a uniform definition of what constituted lead poisoning in children.

By the time government authorities acted, markets already were changing to reduce lead use. When cities began to regulate lead-based paint in the 1950s, zinc had replaced lead as the primary pigment in paint, and industry was phasing out the use of lead in interior paint. Federal action followed city and state action. It was not until 1971 that the federal government banned paints with more than 1 percent of lead for use in interiors and on some exterior surfaces of homes built with federal funds. As scientific evidence was debated and regulation grew, some companies and trade associations financed vigorous advertising campaigns that touted the virtues of lead-based paint. The authors consider this shameful.

A similar pattern characterized the introduction of vinyl chloride in the manufacture of plastic wiring, pipes, and inexpensive replacements for wood and metal. Identification of acute effects on workers was relatively rapid, whereas scientific understanding of the problem and public access to information concerning the long-term risks of low exposure proceeded slowly. The authors retrace the long road from the development of early safety standards for workers in the 1930s, to the accumulating evidence in the 1970s linking low levels of exposure to cancer in animals, and finally to an understanding of the cancer risks to consumers from drinking alcohol and other liquids stored in certain plastic containers.

But what to do?

Throughout these histories, the authors emphasize their dominant themes: Most research was financed by industry; health effects on workers received far more attention than effects on the general public; industry trade associations persisted in advertising the positive qualities of their products while scientific evidence accumulated concerning their risks; industry sometimes hid evidence from workers and government officials; and the increasing sophistication of labor representatives, environmental groups, and public health authorities in assembling scientific evidence was critical to efforts to tighten policy.

What the authors think should be done about all this is not clear. On the one hand, they seem to be arguing for marshaling more objective research to identify toxins that threaten public health and safety. On the other, they suggest that calls for more research delay needed regulation. They raise the prospect of making policy by the precautionary principle of proven safety before use, but fall short of fully embracing this principle. The only firm recommendation is for better public access to information.

In fact, the cases do not support the idea that industry speaks or acts in unison when deciding whether and how to use toxic substances. In these situations, as in many others, competitors have varied interests that policymakers can take advantage of to improve public health. Makers of lead paint competed for market share (unsuccessfully) with makers of alternative paint mixtures even before the government took action, in part because of mounting public concern. Decades before the ban on leaded gas, a competitive battle raged between producers of leaded and unleaded gas. Makers of unleaded gas called attention to possible health effects of leaded gas, and makers of automobiles gradually abandoned their alliance with the lead industry as new technology reduced the additive’s benefits in promoting efficient combustion. Sometimes companies did exercise self-restraint. After a government-sponsored conference heard evidence about possible health effects, the Ethyl Gasoline Corporation suspended production of leaded gasoline until the scientific issues were resolved. Plastics manufacturers called for more attention to the development of safety standards than their trade association was willing to support, and the companies ultimately formed a new association.

The authors’ view that industry is a deceitful monolith keeps them from addressing some of the public policy questions that urgently need attention. Determining the health effects of toxic substances and taking appropriate regulatory action remain formidable challenges. Analytic methods continue to improve, enabling neurologists and toxicologists to measure ever smaller amounts of toxins in the bloodstream. But scientists also increasingly understand that workers and members of the public have variable responses to toxins. Establishing causal relationships remains problematic, many widely used chemicals remain incompletely tested, and determining the safety of a single chemical can cost hundreds of thousands of dollars.

One question that needs attention is how to inform the public about scientific uncertainty in estimates of health and safety risks. Congress decreed in 1986 that manufacturers disclose their discharges of toxic chemicals facility by facility and chemical by chemical. But today, companies still report their discharges only in total pounds, which tells community residents little about risk or scientific certainty.

Such issues are not unique to communicating the risks of toxins. The federal government’s recently adopted disclosure system to inform the public about the likelihood that specific models of sport utility vehicles will roll over uses probability ranges to rank vehicles. The Internet, of course, offers new opportunities to layer information to provide citizens with access to narratives describing scientific uncertainty. Regulators might also create standardized graphics to signal whether scientific uncertainty is high, medium, or low, and tell something about its character.

This is a book to be read with caution. Rosner and Markowitz have performed a valuable service by collecting these histories, but have done a disservice to their own work by focusing mainly on industry deceit and ignoring many of the important questions that their narratives raise.

Stale environmentalism

As a U.S. senator in the 1960s, Gaylord Nelson was clearly ahead of his time. His prescience is nicely illustrated in a letter (reprinted in Beyond Earth Day) that he sent to John F. Kennedy in 1963 offering the president advice for his planned “resources and conservation tour.” The urgency of the environmental situation was so pronounced, Nelson informed Kennedy, that “we have only another 10 to 15 years in which to take steps to conserve what is left.” By the end of the decade, such sentiments would be commonplace, but in 1963 they were exceptional. As a senator, Nelson pushed for far-reaching environmental reform. He is perhaps best known for spearheading the original Earth Day in 1970. The present book, written with the assistance of Susan Campbell and Paul Wozniak, asks how we might “fulfill the promise” of environmental restoration given on that day.

The results are uneven. Nelson offers a useful overview of environmental progress over the intervening decades–minimal though he considers it to be–as well as perennially important reminders of how serious the crisis still is. As Nelson demonstrates, although a number of relatively simple environmental problems have been adequately addressed, the more complicated and potentially disastrous ones, such as those posed by global warming, are more often disregarded. As in 1963, Nelson’s take-home message is that we must act strongly and immediately or all might be lost. Many of his proposals for “achieving sustainability”–ranging from transitioning to solar power to holding comprehensive environmental hearings in Congress–are well considered and worthwhile.

But although Beyond Earth Day may contain a good measure of wisdom, it can hardly be cited for its originality. Little has changed in Nelson’s thinking since 1963, and what appeared so fresh and iconoclastic then seems rather stale today. One can, after all, easily find dozens if not hundreds of books employing the same arguments, highlighting the same statistics, and offering the same prescriptions. The hectoring tone of such works, the endlessly repeated warning that now is the crucial time, with any delay in enacting massive reforms portending disaster, has lost its edge. Although we may need constant reminding of the severity of global environmental problems, one more book on the subject, even one by Gaylord Nelson, will make little difference.

The fact that such a pressing issue as global warming has been largely ignored in this country perhaps indicates that new approaches are needed. As Nelson himself shows, the U.S. public overwhelmingly supports environmental protection, but it also consistently prioritizes economic issues. Such polling data would lend support to an economically sophisticated environmentalism, one seeking as much synergy as possible (while not ignoring the intrinsic tradeoffs) between ecological protection and economic prosperity. Such a vision is not provided here. Nelson’s only concession is to proclaim that “the economy is a wholly owned subsidiary of the environment”–a truism unhelpful for policy debates. His own outmoded economic theory is encapsulated in his view that “the whole economy” is nothing more than the natural resource base. Apparently, technology, knowledge, and human capital in general are to count for nothing. Not surprisingly, Nelson ignores the substantial body of work in environmental economics that has emerged during the past 30 years.

Intriguingly, however, Beyond Earth Day actually begins with some arrestingly fresh assertions about the relationship between environmental protection and economic policy. But these comments were penned not by Gaylord Nelson but by Robert F. Kennedy, Jr., in the book’s foreword. Kennedy argues that, contrary to public perception, federal environmental regulation actually “reimposes the free market” while simultaneously “protect[ing]…private property rights.” He comes to such contrarian conclusions by focusing on how regulation forces polluters to internalize the negative externalities that they impose on unwilling neighbors and other rights holders. Although no doubt rather stretched, such thinking does have the potential to break through the dichotomy between environmental and economic values, as well as that between private and public goods, that so often stymies efforts at genuine environmental reform.

Old wine, old bottle

Unfortunately, one sees relatively little of such novel thinking in the book itself. Instead, one is fed a large serving of the standard green vision of moral simplicity, pitting virtuous and selfless environmentalists against selfish corporate interests and their benighted right-wing apologists. Although such a stark comparison is indeed often apt, it hardly captures the full scope of contemporary environmental debate. Nelson, for example, portrays “environmental extremism” as little more than a chimera invented by the opposition in order to mislead a gullible public; he is apparently unwilling to admit that Luddite extremists even exist, much less to acknowledge that their actions often play directly into the hands of anti-environmentalists. He is similarly loath to admit that working ranchers, farmers, and loggers sometimes have legitimate grievances against certain environmental regulations, much less that some ranchers are actually responsible stewards who effectively save land from subdivision. As a result, Nelson overlooks another new form of environmentalism that has arisen over the past 30 years, one that works with rather than against such stakeholders at the local level, seeking the conservation of land as well as livelihood.

Perhaps Nelson and his colleagues do not consider these complexities significant enough to merit consideration, or perhaps they simply do not want to alienate hard-core greens and other stalwarts of the left who are ever ready to denounce anything smelling of compromise or conciliation with “brown” forces. But on one issue, that of population growth in the United States, Nelson eagerly assails the political left and, by extension, a sizable component of the green movement itself. Population growth at current rates, fueled largely by immigration, will yield a country of over 500 million inhabitants within the next 70 years and up to one billion by sometime in the next century. Nelson is dismayed by these numbers, cogently arguing that they will place intolerable burdens on the nation’s ecosystems. Support for high levels of immigration, he implies, comes largely from the multicultural left, a group that has effectively “silenced much-needed discussion of the issue” through “charges of ‘nativism,’ ‘racism,’ and the like.” Even “the great American free press,” Nelson warns, has been “frightened into silence” by rampant “political correctness.”

Nelson’s understanding of the political dynamics behind this country’s population policy is far from complete–and oddly so. He does not even mention what is probably the most important reason why immigration rates remain so high: the fact that corporate leaders, a group that he is otherwise keen to excoriate, generally favor permeable borders. Business executives and mainstream conservatives tend to favor rapid immigration in order to reduce the cost of labor and to militate against unionism; labor organizers, by the same token, have historically been eager to constrain immigration. Nelson might have used this issue to make common cause with labor, a segment of the traditional left that has been estranged from the environmental movement. The opportunity, however, was not taken.

Nelson does deserve praise for highlighting the domestic population issue so insistently. Genuine debate on the topic is rare indeed, as both major political parties have apparently concluded (for different reasons) that current levels of immigration are in their, if not the country’s, best interest. But even so, Nelson’s stance is disconcertingly extreme, hostile to virtually all immigration. “Adding population,” he flatly informs us, “hasn’t improved American society [or] the economy”–oblivious to the gains realized from increasing cultural diversity and from the entrepreneurial strivings of newcomers. Similarly, his analysis of migration’s push factors is so beclouded by green ideology as to be of little value. People flee from the Philippines to the United States, he tells us, because of “diminished croplands” at home, just as migrants leave India due to “ravaged” ecosystems. In actuality, there is no good evidence for cropland diminution in the Philippines, much less for it pushing people out of the country. And certainly most U.S.-bound Indian emigrants are middle-class urbanites seeking greater economic opportunity, often in the high-tech sector.

Gaylord Nelson’s environmental message still bears repeating, familiar though it is. But the approach that it entails, focused on such endeavors as intensified environmental education, the inculcation of environmental ethics, the holding of congressional hearings, and the presidential delivery of an annual state of the environment address, could hardly prove adequate to the task. We have had a surfeit of eco-preaching during the past 30 years, yet when it comes to the truly serious environmental issues of the day we find ourselves thwarted. New, more inclusive approaches, aimed at a much broader audience, therefore seem necessary. Perhaps the best starting point would be to reduce the level of sanctimony in environmentalist discourse and to drop the habit of regarding critics of the green orthodoxy as corporate hirelings deserving only censure. Surely some of the criticisms leveled by such thoughtful authors as Bjorn Lomborg and Ronald Bailey merit careful consideration. Until the environmental community gains the maturity to engage its critics in serious and respectful debate, I doubt whether its promises can ever be fulfilled.

Weapons and Hope

As Freeman Dyson observes in his book Weapons and Hope, scientists and engineers have a complex relationship with military weapons. Advances in science have over the years made possible ever more powerful weapons, and scientists and engineers have been intimately involved in applying new scientific developments to the design of more advanced weapons. Acutely aware of the harm that can be done by new weapons, scientists and engineers have also been among the leaders of efforts to curb development of new weapons, particularly nuclear weapons.

Dyson has applied his impressive knowledge of history and literature as well as his personal insights into humanity to the problem of nuclear proliferation. We need Dyson’s wisdom and a good deal more to deal with the array of military technologies that now confront us. In some ways, nuclear weapons were simple. They were big bombs possessed by big countries. Today’s weapons can be microscopically small, and they are available to virtually all countries and even to underground terrorist groups.

Dyson wrote about some of the new threats associated with biotechnology and nanotechnology in a recent article in the New York Review of Books in which he recounted a debate between himself and Bill Joy, cofounder and chief scientist of Sun Microsystems. Joy attracted attention with an article he published in Wired arguing that scientists should forgo certain areas of research because they could lead to applications so dangerous that we cannot afford to risk their coming into existence.

Dyson disagreed with the suggestion that areas of knowledge should be avoided. Although he acknowledged the same dangers that alarmed Joy, he argued that knowledge is not the problem. Human beings can choose how to use knowledge, and he recounted the history of biologists assessing the potential dangers inherent in new genetic knowledge and taking responsible action to eschew some applications of the new biology without curbing further scientific progress. He says that Joy’s precautionary stance is wrong because it fails to weigh benefits as well as risks. As Dyson has remarked before, in the early stages of any major scientific development we are simply incapable of predicting the extent of its risks or benefits. He concludes that it is wrong to deny ourselves the potential benefits. We should proceed, and when dangers emerge, we should deal with them at the time.

Dyson has warned: “There is nothing so big nor so crazy that one out of a million technological societies may not feel itself driven to do, provided it is physically possible.” But he is an unapologetic enthusiast for the potential of human curiosity and ingenuity to make our lives better. He argues strongly for vigilance and responsibility, but he ultimately believes in hope. The authors in this issue share Dyson’s approach. They do not indict the interconnectedness of the wired world or the power of biotechnology; they explore the details of how technology is used and seek creative ways to minimize the harm that new technologies can cause.

In his defense of progress, Dyson turns to a surprising ally, the seventeenth-century English poet John Milton. In 1644, many members of Parliament wanted to ban a technology that they believed was being used by religious fanatics to corrupt souls and ultimately to overthrow the existing political order. In his essay “Areopagitica,” Milton agreed that the technology possessed the potential for great mischief, but he argued that the potential benefits were too great to be ignored. He maintained that the technology should be developed and that its misuse could be regulated later. That technology was unlicensed printing.

Forum – Spring 2003

Climate research

As an old-timer in the climate policy arena, I applaud the article by Roger Pielke, Jr., and Daniel Sarewitz (“Wanted: Scientific Leadership on Climate,” Issues, Winter 2003). For the past decade, I have maintained that climate change research must follow the guiding principle that we shall have to live with uncertainty. The details of the future of the climate system and its interaction with the global socioeconomic system are indeed unpredictable. Nevertheless, important messages can be extracted from current research findings. In trying to do so, more attention must be paid to issues such as decisionmaking under uncertainty, the economics of climate change, and the role of institutions and vested interests in the battles for control among the actors on the climate change scene. The key challenges will be to stimulate genuine interaction between decisionmakers and social science researchers and to focus sharply on the most relevant social and political issues.

However, there is also a need for the climate system research community to give priority to analyses and summaries of present knowledge that are of more direct and immediate use for the development of a strategy to combat climate change. For example, how quickly are actions required, and what burden sharing between industrial and developing countries would be fair and most effective in efforts to increase mitigation during the next few decades?

The warming observed so far (about 0.6°C over the entire globe and 0.8°C over the continents) is probably only about half of what is in the making because of greenhouse gases already emitted to the atmosphere. The effect of human activity on climate is thus to a considerable degree hidden. The inertia of the climate system cannot be changed. Ironically, air pollution provides some protection against global warming, but preserving pollution is obviously not the answer to the problem. On the contrary, for health and other reasons, efforts are under way to reduce smoke and dust. Global warming of about 1.5°C is therefore unavoidable.

The global society has been slow to respond to the threat of climate change. Developing countries give top priority to their own sustainable development and argue that primary responsibility belongs with the industrial countries, which so far have produced about 75 percent of total CO2 emissions with only 20 percent of the world’s population. Past capital investments in these countries mean, however, that the costs of rapid action are considerable. Human activities have already boosted greenhouse gas concentrations considerably. Unless forceful action is taken to limit emissions, it seems likely that greenhouse gas concentrations will reach a level that will result in an average global temperature increase of at least 2°C to 4°C. The climate change issue needs much more urgent attention, not least to clarify what the effects might be.

Considerably more than half of the emissions of greenhouse gases still come from industrial countries. Even if they were to reduce their emissions by 50 percent during the next 50 years, the developing countries would still, in order to prevent a doubling of the CO2 level in the atmosphere, have to restrict their per capita emissions to about 40 percent of what the industrial countries emit today. It is obvious that a global acceptance of the Kyoto Protocol would be only a small first step toward an aim of this kind. The antagonism between rich and poor countries will only become worse the longer the industrial countries delay in taking forceful action to reduce carbon emissions.
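
To make the arithmetic behind such burden-sharing claims easier to follow, here is a minimal back-of-envelope sketch in Python. Every input below is an illustrative assumption of mine (the industrial share of current emissions, the population figures, and the assumed global emissions cap); the letter itself supplies only the 50 percent industrial reduction and the rough 20/80 population split, so the percentage that comes out depends entirely on the assumed numbers.

```python
# Back-of-envelope burden-sharing arithmetic in the spirit of Bolin's example.
# All inputs are illustrative assumptions, not figures taken from the letter.

def developing_per_capita_share(
    industrial_emission_share=0.6,        # assumed fraction of current global emissions
    industrial_cut=0.5,                   # 50 percent reduction over 50 years (from the letter)
    global_cap_vs_today=1.0,              # assumed future global emissions relative to today
    industrial_population=1.2e9,          # assumed current industrial-country population
    developing_population_future=7.0e9,   # assumed developing-country population in 50 years
):
    """Developing-country per capita allowance as a fraction of current
    industrial-country per capita emissions, under the stated assumptions."""
    industrial_future = industrial_emission_share * (1 - industrial_cut)
    developing_allowance = global_cap_vs_today - industrial_future
    developing_per_capita = developing_allowance / developing_population_future
    industrial_per_capita_today = industrial_emission_share / industrial_population
    return developing_per_capita / industrial_per_capita_today

print(f"{developing_per_capita_share():.0%} of current industrial per capita emissions")
```

With these particular assumptions the answer comes out closer to 20 percent than to 40 percent; the result is quite sensitive to the assumed emissions cap and population figures, which is exactly why the choice of stabilization target matters so much in these debates.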

BERT BOLIN

Professor Emeritus

University of Stockholm

Sweden

Past Chairman of the UN Intergovernmental Panel on Climate Change


We take strong issue with the claims of Pielke and Sarewitz, who state, “What happens when the scientific community’s responsibility to society conflicts with its professional self-interest? In the case of research related to climate change the answer is clear: Self interest trumps responsibility.”

This outrageous and unsupported statement is egregiously wrong. It suggests that climate scientists pander to bureaucratic funding agencies, distorting or misrepresenting their results in order to attract research funds. The falseness of this is documented by the readily available publication records of the undersigned. These show that our scientific positions on important issues in the science of climate change have evolved over time. This evolution is both a response to and reflection of the community’s developing understanding of a complex problem, not a self-interested response to the changing political or funding environment.

The basic driver in climate science, as in other areas of scientific research, is the pursuit of knowledge and understanding. Furthermore, the desire of climate scientists to reduce uncertainties does not, as Pielke and Sarewitz claim, arise primarily from the view that such reductions will be of direct benefit to policymakers. Rather, the quantification of uncertainties over time is important because it measures our level of understanding and the progress made in advancing that understanding.

Of course, it would be naïve to suppose that climate scientists live in ivory towers and are driven purely by intellectual curiosity. The needs of society raise interesting and stimulating questions that are amenable to scientific analysis. It is true, therefore, that some of the results that come from climate science are policy relevant. It is also true that scientists in the community are well aware of this. It is preposterous, however, to suggest that climate science is primarily policy driven.

The irony of Pielke and Sarewitz’s article is that it criticizes climate scientists in order to promote research areas in which the authors themselves are engaged. One could easily interpret this as an example of self-interest trumping the responsibility that all scientists have of presenting a fair and balanced view of the issues. The positive points in their article, although not new, are sadly diluted by their confrontational approach. Their false dichotomies only divide the natural and social sciences further, whereas more cooperative interactions would benefit both.

TOM WIGLEY

National Center for Atmospheric Research

KEN CALDEIRA

Lawrence Livermore National Laboratory

MARTIN HOFFERT

New York University

BEN SANTER

Lawrence Livermore National Laboratory

MICHAEL SCHLESINGER

University of Illinois at Urbana-Champaign

STEPHEN SCHNEIDER

Stanford University

KEVIN TRENBERTH

National Center for Atmospheric Research


The state of California takes climate change seriously. Since 1988, when then-Assemblyman Byron Sher sponsored legislation focused on the potential risks of climate change, California has pursued the full range of responses: inventory and assessment, scientific research, technological development, and regulations and incentives. Our interest in the relationship of science to policy led to extensive review of, and comments on, the draft strategic plan of the federal Climate Change Science Program. Many of our comments parallel the observations of Roger Pielke, Jr. and Daniel Sarewitz.

California starts with the premise that climate change is real and threatens costly potential impacts on water, energy, and other key economic and environmental systems in the state. The potential response to climate change includes both adaptations that are already underway and mitigation of greenhouse gas (GHG) emissions. The real policy issue before us, and therefore the target for scientific research, is the appropriate size and mix of investment in these two general categories of response.

We agree with Pielke and Sarewitz that regions and states are central to any adaptation strategy. The unique geography of California, a Mediterranean climate region within the United States, leads to impacts that may not manifest themselves elsewhere in the nation. The overwhelming dependence of California on the Sierra Nevada snow pack and long-distance transfers of water, much of which passes through the complex estuary of the San Francisco Bay-Delta area, is an example. We expect that effective adaptation will flow from policy decisions at the regional and state level rather than at the national level.

The policy context for adaptation within the state is found in major strategic planning efforts such as the State Water Plan; the State Transportation Plan; the newly authorized Integrated Energy Policy Report; the California Legacy Project, which focuses on land conservation priorities; the overarching Environmental Goals and Policy Report; and in the guidance that the state provides to local jurisdictions for land use planning. However, simply focusing attention on the issue is not sufficient. These plans will only be effective to the extent that climate science can provide these agencies with climate scenarios that describe a range of possible future climates that California may experience, at a scale useful for regional planning. Reducing uncertainty in projections of future climates is critical to progress, and we will actively pursue the help of the federal science agencies.

With respect to mitigation, the question is not if, but rather how. How do we lower our dependence on fossil fuels in a manner that stimulates rather than harms our economy? We do not find “decarbonization” of the economy to be inherently in conflict with economic growth and job creation–quite the opposite. To this end, we have aggressive programs researching emerging technology and the economics of its deployment, active incentive programs that will move our state toward more renewable energy and higher energy efficiencies, and regulatory programs that will lower GHG emissions from the transportation sector. Reducing uncertainty in the costs and benefits of these programs is central to our efforts, but our commitment to reducing GHG emissions is clear.

Of course, we know that reducing California’s GHG emissions will not by itself stabilize the climate or reduce the amount we must spend on adaptation. But to argue against mitigation at the state or regional level on this basis is to misunderstand the larger historic role of California in providing environmental leadership, which is often adopted nationally and in some instances worldwide. The move away from a carbon-based economy is not a project to be understood solely through marginal economic analysis but rather as a historic transition from the long chapter of dependence on industrial fossil fuel and its associated pollution toward newer, cleaner, and hopefully more sustainable energy sources. California is pleased to be charting the course toward this new future.

WINSTON H. HICKOX

Agency Secretary

California Environmental Protection Agency

Sacramento, California

MARY D. NICHOLS

Agency Secretary

California Resources Agency

Sacramento, California


Roger Pielke and Daniel Sarewitz impressively highlight the marginal inutility of the quest for ever more uncertainty-reducing research on climate change. They expose the error of delaying hard policy choices by hiding behind scientific uncertainty, while they resist the temptation of advocates who, by exaggerating climate dangers, only discredit the case for action. Dispassionate observers now recognize that, notwithstanding uncertainties, we already know enough about climate risks to justify meaningful action.

What kinds of action? As the authors correctly observe, mitigating potentially serious climate change implies decarbonizing the global economy. We should begin reducing CO2 emissions, not because the ill-conceived Kyoto Protocol says so, but because it makes sense from the perspective of both risk management and national security. Serious measures to reduce energy inefficiencies and to promote much greater energy conservation would reduce dangerous dependence on Middle East oil. Many major corporations have demonstrated that energy efficiency and conservation are feasible and profitable. Removing subsidies for fossil fuels, investing more in renewable energies, imposing challenging fuel standards on our gas-guzzling vehicles, and tightening energy efficiency codes for new construction and appliances would all send the right signals.

However, the century-long transformation of the world’s energy system will require a technological revolution comparable to the conquest of space. We cannot rely on the market, with its short time horizon, to come up with needed investments in research and infrastructure. This is quintessentially a task for government. A $5/ton carbon tax, translating into little more than a penny a gallon at the gas pump, would yield over $8 billion–enough to quadruple the annual public sector energy R&D budget. New energy technologies will facilitate political decisions–and private investments–to limit emissions. And like the space program, a major research initiative would yield political benefits by generating jobs and commercial spin-offs.
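
Benedick’s two figures can be checked with simple arithmetic. The sketch below uses assumed round numbers that do not appear in the letter: roughly 1.6 billion metric tons of carbon emitted annually in the United States around 2000, and roughly 2.4 kilograms of carbon released per gallon of gasoline burned.

```python
# Rough sanity check of the carbon-tax figures, using assumed round numbers
# for circa-2000 U.S. conditions (not values supplied by Benedick).

TAX_PER_TON_CARBON = 5.0      # dollars per metric ton of carbon
US_EMISSIONS_TONS = 1.6e9     # assumed annual U.S. carbon emissions, metric tons
CARBON_PER_GALLON_KG = 2.4    # assumed carbon released per gallon of gasoline

annual_revenue = TAX_PER_TON_CARBON * US_EMISSIONS_TONS
tax_per_gallon = TAX_PER_TON_CARBON * (CARBON_PER_GALLON_KG / 1000.0)

print(f"Annual revenue: ${annual_revenue / 1e9:.0f} billion")        # about $8 billion
print(f"At the pump: {tax_per_gallon * 100:.1f} cents per gallon")   # about 1.2 cents
```

With those inputs the tax yields about $8 billion a year and adds a bit more than a penny per gallon, consistent with the figures in the letter.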

Yet the root of the current situation is absence of political will. The insistence on resolving uncertainties is a mask for policymakers who have already made up their minds that no contentious actions should be undertaken on their watch. Here I feel Pielke and Sarewitz place an unrealistic burden on scientists themselves to break the impasse.

Two decades ago, efforts to avert destruction of the ozone layer also faced powerful political and economic opposition, both here and abroad. Nevertheless, the United States played the central role in negotiating the 1987 Montreal Protocol to drastically reduce use of ozone-depleting chemicals. I now believe that a crucial difference between the ozone and climate issues is the degree of public concern over potential dangers, which reinforced the warnings of scientists. Even before the U.S. ban on chlorofluorocarbons in spray cans, consumer demand for these products dropped by two-thirds. Never underestimate the power of the consumer: The idea that extraterrestrial radiation could wreak havoc with our genes made the politicians take notice.

Interestingly, when President Reagan later overruled his closest friends and approved the State Department’s recommendation for a strong ozone treaty, he had recently been operated on for skin cancer. Today, absent a revelational climate experience at the top, the responsibility devolves again to us in our myriad consumer choices. A mass boycott of SUVs might have a powerful effect on climate policy, but I see no sign of such public concern.

RICHARD BENEDICK

Joint Global Change Research Institute

Pacific Northwest National Laboratory

Ambassador Benedick was chief U.S. negotiator of the Montreal Protocol and is the author of Ozone Diplomacy.


The arguments presented by Roger Pielke, Jr., and Daniel Sarewitz that effective action on global change is being hindered by climate scientists and that climate science will not reduce uncertainties in a manner that is useful for decisionmakers are basically flawed.

Pielke and Sarewitz do raise three valid and important issues: (1) the needs of many decisionmakers have played too small a part in the agenda of research on global change; (2) “uncertainty” has been used extensively as part of the calculus of avoiding action, despite the fact that our society routinely makes decisions in the face of uncertainty; (3) focused research related to energy policy, impacts, and human responses to change has been avoided or poorly supported. These issues stand out as fundamental weaknesses in the way we have approached climate and global change research.

Unfortunately, Pielke and Sarewitz have detracted from this important message with two arguments:

  1. Self-interest trumps responsibility: The problem occurs because the primary beneficiaries of the research dollars related to “uncertainty” are the scientists themselves. As early as 1988, the Committee on Global Change (CGC) called for more human dimensions research in order to be “more useful and (to provide) effective guides to action.” During the 1990s, the discussions within CGC and its partners (such as the Board on Atmospheric Sciences and Climate and the Climate Research Committee), as well as our discussions with federal agencies, envisioned a human dimensions and policy component that matched, in total dollars, the physical sciences budget of the U.S. Global Change Research Program (USGCRP). This sense continues right up to recent reports by the Committee on Global Change Research: the so-called “pathways” report and the report on “putting global and regional science to work.” The National Research Council report, Our Common Journey, which includes climate scientists as authors, stands as a masterful and compelling call for a new research agenda. The USGCRP effort, Climate Change Impacts on the United States: Overview also offers a research agenda. In this report, the fifth item on the list of priorities calls for improvements in climate predictions. The preceding priorities are tied directly to assessing impacts and vulnerabilities, examining the significance of change to people, examining ecosystem response, and enhancing knowledge of how societal and economic systems will respond. The facts are that throughout the USGCRP tenure, the federal political leadership never agreed to put the dollars in place to link our sciences to policy, despite the recommendations of the scientists that Pielke and Sarewitz take to task. This continues today. It is impossible for this community to avoid seeing that the Climate Change Impacts assessment has been excluded from the first draft of the new Climate Change Science Program Strategic Plan. Yet it was clearly proposed in the workshops and discussions that preceded the release of the draft plan.
  2. More research will not yield a useful reduction in uncertainties. The authors are overreaching by suggesting that a reduction in uncertainty won’t aid decisionmakers and that it will take “forever” for the Climate Change Science Program to produce simple, tangible recommendations. The proof is the history of weather forecasting. Fifty years ago, weather forecasting was thought to be so full of uncertainty as to be practically meaningless. But fortunately these scientists did not give up their research in order to focus on the science of decisionmaking and of risk assessment, as the authors seem to suggest for climate research. The investment in weather forecasting, although it took decades, has yielded a remarkable system of observing systems and predictive models that have huge value for a very broad set of decisionmakers. Climate science has similar potential.

We need to accept Pielke and Sarewitz’s concern that we have a fundamental flaw in U.S. climate and global change research and that this flaw must be addressed. However, to suggest that the flaw is due to professional self-interest or to a lack of potential for climate research to aid society is unwarranted and ignores history.

ERIC BARRON

Penn State University

University Park, Pennsylvania


What has $20 billion spent on climate change research since 1988 bought us? Roger Pielke, Jr., and Daniel Sarewitz claim that we have purchased quite a bit of academic understanding, but not much that helps policymakers. We must remember, however, that much more science is yet to come from this huge investment. And creating and discovering information is the foundation of higher living standards and environmentally sensible economic development. (As one who is paid to study climate by using satellites, I admit to “professional self-interest.”)

Have we helped policymakers? Probably not, I would agree. Policymakers know that dealing with climate change by, for example, increasing energy costs, harms those who can least afford it. So in my view, to avoid giving their opponents in the next election any ammunition, policymakers tend to do no harm, especially on issues beset by uncertainty. Though much will be said in the coming election cycle about global warming, I believe that energy will remain affordable–good news to the many poor people of my state.

Pielke and Sarewitz are correct to call for direct investment in finding new energy sources. Energy, although remaining affordable for the masses, will be gradually decarbonized in the coming century, just as transportation was “de-horsified” in the last. The federal role here should be to encourage discovery of the path of decarbonization with carrots (such as funding), not sticks. As the authors imply, regulating CO2 emissions (a regressive and expensive stick) to lower levels that are somehow politically and economically tolerable will not perceptibly affect whatever the climate will do.

In terms of funding for climate research, the authors suggest a shift from the present “reducing uncertainties” emphasis to, as they say, “developing options for enhancing resilience.” As a state climatologist who deals daily with issues of climate and economic development, I find that there is no real payoff from knowing a climate model’s probability distributions for whether the temperature will rise one degree in the next X or Y decades. What matters is reducing vulnerabilities (enhancing resilience) to the climate variability that we know now exists: the storm that floods a town, a three-month drought in the growing season, a hurricane that wipes out expensive condos at the beach.

How do we Americans enhance environmental resilience? Widening and renaturalizing our channelized (usually urban) waterways, removing incentives for building cheaply on the oceanfront, improving and expanding wastewater treatment plants, developing water policy that accommodates the extremes–these are common-sense actions whose benefits are tangible and to which climate research would directly apply.

In the developing world, enhancing resilience is tied to a more basic set of initiatives that promotes human rights (especially for women), allows transformation to functional governance, reduces energy generation from biomass burning (wood and dung), develops infrastructure to withstand climate extremes, and so on.

If scientific leadership on climate directs research priorities in some way to help achieve these goals, our taxpayers will receive a real bargain.

JOHN R. CHRISTY

Director, Earth System Science Center

University of Alabama, Huntsville


Roger Pielke, Jr. and Daniel Sarewitz’s call to arms prompts an inevitable question: Who will show scientific leadership in the climate-forcing debate? A review of recent developments suggests that action is occurring independently of federal directives or strategies.

The 2001 Intergovernmental Panel on Climate Change (IPCC) report has catalyzed a general consensus that although climate dynamics are incompletely understood, increases in tropospheric temperatures are a potentially serious global environmental threat. Even federal agencies seem to have come to the conclusion that global warming has the potential to disrupt both human habitats and natural ecosystems.

For more than 10 years, states have been formulating climate policy in the absence of a complete understanding of climate dynamics or vulnerability impacts. A range of programs have been or are in the process of being developed and implemented. They target renewable energy, air pollution control from both stationary and mobile sources, agricultural and forestry practices, and waste management strategies. A robust climate change discussion can occur at the state level with less dissension than within federal agencies.

Independent of regulatory or policy efforts, domestic and foreign corporations are seeking competitive advantage through the introduction of innovative product lines. These products create demand through product differentiation to acquire customers who value environmental performance, through reducing resource use that subsequently lowers operational costs (such as fuel efficiency), or through early conformity with regulatory requirements. In each case, businesses are using risk management strategies that balance market uncertainties with the need to create long-term value for environmentally friendly product lines.

Such state and corporate activities suggest that distributed approaches to climate-forcing mitigation may be as influential on federal policymakers as the scientific community continues to be.

DAVID SHEARER

Chief Scientist

California Environmental Associates

San Francisco, California


Future highways

In “Highway Research for the 21st Century” (Issues, Winter 2003), Robert E. Skinner, Jr. does a very good job of highlighting the importance of research in the highway field and explaining the unique role that the Federal Highway Administration (FHWA) plays in a decentralized highway research community. The FHWA is committed to providing leadership to a nationally coordinated research and technology (R&T) program, championing the advancement of highway technological innovation, and advancing knowledge through research, development, training, and education.

The FHWA’s leadership role in conducting research to address national problems and advancing new technologies to serve the public is directly related to its stewardship role in using national resources wisely. Stewardship requires that we continue to find ways to meet our highway responsibilities to the public by efficiently delivering the very best in safe, secure, operationally efficient, and technically advanced highway facilities, while meeting our environmental responsibilities. Since FHWA does not own or operate this country’s highway system, providing leadership and working through partnerships are key to our success.

In response to our own agency assessment of our R&T business practices and the recommendations of Transportation Research Board (TRB) Special Report 261, The Federal Role in Highway Research and Technology, the FHWA currently has a major corporate initiative underway to raise the bar for research and deployment of technology and innovation. This effort includes increased stakeholder involvement in our R&T programs and achieving even greater collaboration with other members of the R&T community.

Throughout its history, the FHWA has supported fundamental long-term research aimed at achieving breakthroughs, identified and undertaken research to fill highway research gaps, pursued emerging issues with national implications, and shared knowledge. This important work will continue. It is essential that we support and manage our R&T programs so that they continue to produce the innovative materials, tools, and techniques to improve our transportation system.

As part of the reauthorization process, the U.S. Department of Transportation is proposing resources to invest in R&T in order to further innovation and improvements that are critical in meeting our highway responsibilities to the nation in vital areas such as safety, congestion mitigation, and environmental stewardship. As a point of clarification and amplification on Skinner’s article, those proposing the Future Strategic Highway Research Program (F-SHRP) are not intending it to be a prescription for the FHWA’s future R&T program. Rather, F-SHRP is a special-purpose, time-constrained research program that is intended to complement the FHWA’s R&T program and other national highway R&T programs. If approved by Congress, F-SHRP would concentrate additional research resources on a few strategic areas to accelerate solutions to critical problems.

Delivering, in a timely manner, transportation improvements that are environmentally sound and that provide Americans with the mobility and safety they have come to expect is no small task. The key to success for our national highway R&T program is a solid partnership among federal, state, and local government, the private sector, and universities. We look forward to working with TRB and all of our partners in carrying out a national R&T program to achieve these goals.

MARY E. PETERS

Administrator

Federal Highway Administration

Washington, D.C.


Aquatic invasive species

Allegra Cangelosi (“Blocking Aquatic Invasive Species,” Issues, Winter 2003) describes legislation that my colleagues and I introduced to prevent the introduction of aquatic invasive species into our ecosystems and to control and eradicate them once they are here. Aquatic invasive species pose a major threat to our economy and environment. It is imperative that Congress act swiftly to pass this important legislation. I would like to expand on one important piece of this effort that I have drafted, which is critical to the success of the legislation: the research portion.

In many ways, the federal government has failed to prevent aquatic invasive species from invading our waterways. Much of this failure occurred because when Congress passed the 1990 and 1996 laws dealing with aquatic invasive species, research was simply an afterthought. Yet science must underpin management decisions if these decisions are going to be effective and considered credible by the outside world. In this bill, we strive to fix this and provide for the necessary research, so that agencies can effectively and cost-efficiently carry out their management mandates.

In the Aquatic Invasive Species Research Act, we establish a comprehensive research program. The legislation I have drafted directly supports the difficult management decisions that agencies will have to make when carrying out our new management program. For example, when agencies develop standards for ships, they must ask: What is the risk that ballast water poses to our environment? What about ship hulls and other parts of vessels? Are our current management decisions working? This legislation sets up a research program to answer these and other difficult questions.

To protect our environment and our economy, it is critical that we prevent the introduction of aquatic invasive species to U.S. waters and eradicate any new introduction before the species can become established. Prevention requires careful, concerted management, but it also requires good research. For example, it is impossible to know how to prevent invasive species from entering the United States without a good understanding of how they get here. The pathway surveys called for in this bill will help us develop that understanding. We cannot screen planned importations of non-native species for those that may be invasive without a thorough understanding of the characteristics that make a species invasive and an ecosystem vulnerable, a profile that would be created in this bill. Finally, we cannot prevent invasive species from entering our waters through ships’ ballasts without good technologies to eradicate species in ballast water. This bill supports the development and demonstration of technologies to detect, prevent, and eradicate invasive species.

Preventing aquatic invasive species from entering U.S. waters and eradicating them upon entry are critical to our economy and environment. Good policy decisions depend on good scientific research. By focusing heavily on research in our effort to combat invasive species, Congress can ensure that the best decisions will be made to reduce the large and growing threat that aquatic invasive species pose.

REP. VERNON J. EHLERS

Republican of Michigan


I read with great interest Allegra Cangelosi’s article. Her discussion of the history and current status of ballast water exchange and treatment issues is quite enlightening. It is particularly important that she discusses several non-ballast water issues that are also being addressed in the reauthorization of the National Invasive Species Act through the draft bill entitled the National Aquatic Invasive Species Act (NAISA). Individuals and agencies involved with invasive species are becoming increasingly aware that non-ballast water vessel-mediated pathways may contribute as much to invasive species transport as ballast water. For example, organisms attached to hulls, especially those involved in coastal (nontransoceanic) traffic, are very likely to be transported alive and be able either to drop off the hull into a new environment or shed reproductive material that will have a great likelihood of surviving.

It is generally accepted that the most effective approach to addressing invasive species is to stop them before they arrive; thus the emphasis placed on ballast water management. In addition, it is vital to identify and fully understand other pathways by which nonindigenous species arrive in the United States or travel domestically from one area of the country to another. Short of interdiction of nonindigenous species, early detection and rapid response are the most important safeguards we have available. If incipient invasions can be detected, the potential for effective rapid response to eradicate or control the subsequent spread of the nonindigenous organism is increased, thus short-circuiting the evolution to invasiveness and the associated negative impacts. Beyond those tools, we are left with complex and expensive control measures in an attempt to minimize the negative impacts of fully invasive species. These are important issues that are also addressed in the draft NAISA, which is expected to be reintroduced in 2003.

Cangelosi’s article provides very concise, no-nonsense information to the public regarding a very complex and serious ecological and economic threat to the security of our nation. It is heartening to see that members of Congress are becoming aware of the pervasiveness of invasive species and are proposing ways to increase our ability to address this very serious problem. As Cangelosi points out, there is still a chance that NAISA could be stalled, limiting our ability to do what needs to be done. We should all realize that the price tag for doing nothing about invasive species is much larger than the cost of measures that would allow us a fair opportunity to take positive action. We should all take a great interest in the passage of this bill when it is reintroduced.

RON LUKENS

Assistant Director

Gulf States Marine Fisheries Commission

Ocean Springs, Mississippi


Although Allegra Cangelosi’s article touches on other pathways for the introduction of aquatic nonindigenous species, such as fouling of hulls and sea chests, the focus is clearly on ballast water. This is appropriate, as ballast water currently appears to be the most important vector for aquatic invaders. This situation could change, however, because of the development of increasingly effective tools for managing ballast water and the removal from the marketplace of some effective treatments for fouling. The International Maritime Organization will soon ban, because of their impact on the environment, the application of highly toxic antifouling paints containing tributyltin, and eventually all ships currently using these coatings will have to be stripped or overcoated. Alternative hull treatments of lesser toxicity may also control fouling less effectively or may employ booster biocides whose environmental effects are poorly understood. More frequent hull cleaning may not be an option, because the act of cleaning can itself introduce invaders. The National Aquatic Invasive Species Act of 2002 lists a set of best management practices to control introductions via fouling but provides no framework for developing alternative treatment systems.

Problems with shipborne invaders are part of a much larger issue. What is needed is an integrated approach to all introduced species, including more interagency cooperation, the establishment of screening protocols, the development of rapid response procedures and better treatment methods, and vastly more research. As pointed out by Cangelosi regarding ballast operations, at least nine states have addressed the problem on their own, and their responses are not all consistent with one another. Further, different introduced species transported by different means from different locales often interact to exacerbate their separate impacts. For example, zebra mussels, which probably reached North America in ballast water, interact with Eurasian watermilfoil, an invasive weed that was widely sold as an aquatic ornamental (and probably arrived by that route). Each species facilitates the spread of the other into new habitats. An umbrella organization, a Centers for Disease Control and Prevention-like national center for biological invasions, could integrate management and research throughout federal, state, and local governments.

Scientists and managers have identified the same types of problems and needs, seen here with invasive aquatic organisms, for almost every other type of invading organism that affects North America’s lands, waters, and public health. It is not only inefficient to address the issue piecemeal; costly problems can arise that could have been foreseen and avoided.

DON C. SCHMITZ

Tallahassee, Florida

DANIEL SIMBERLOFF

Nancy Gore Hunger Professor of Environmental Studies

University of Tennessee

Knoxville, Tennessee


Workplace hazards

The Occupational Safety and Health Administration’s (OSHA’s) Hazard Communication Standard (HCS) has been one of the agency’s most successful rulemaking efforts. As noted in “Improving Workplace Hazard Communication” by Elena Fagotto and Archon Fung (Issues, Winter 2003), prior to the HCS being promulgated, it was often difficult for employers to identify chemicals in their workplaces, much less locate any information regarding the effects of these chemicals and appropriate protective measures. This obviously made it nearly impossible for many workers exposed to such chemicals to get any meaningful information about them.

The HCS provided workers the right to know the identities and hazards of the chemicals in their workplaces. In addition, and perhaps more important in terms of worker protections, the HCS gives employers access to such information with the products they purchase. As noted in the article, a worker’s right to know may not necessarily lead to improved workplace safety and health. The benefits related to reducing chemical source illnesses and injuries in the workplace result primarily from employers and employees having the information they need to design appropriate protective programs. These benefits can be achieved by choosing less hazardous chemicals for use in the workplace, designing better control measures, or choosing more effective personal protective equipment.

The HCS is based on interdependent requirements for labels on containers of hazardous chemicals, material safety data sheets (which are documents that include all available information on each chemical), and training of workers to make sure they understand what chemicals are in their workplaces, where they are, and what protective measures are available to them. Although workers should and do have access to safety data sheets, these documents have multiple audiences and are not only a resource for workers. They are used by industrial hygienists, engineers, physicians, occupational health nurses, and other professionals providing services to exposed employees. To serve the needs of these diverse audiences, the safety data sheets must include technical information that may not be needed by workers but may be useful to others.

However, as noted, workers should also have access to container labels as well as training about hazards and protective measures. Comprehensibility assessments should be based on consideration of all the components of the hazard communication system and should address the information that workers need to know to work safely in a workplace. Addressing safety data sheets by themselves without consideration of the other components, or basing such considerations on the premise that all parts of the safety data sheets need to be easily comprehensible to workers, is not sufficient.

OSHA has actively participated in development of a Globally Harmonized System (GHS) to address hazard communication issues. In that process, we encouraged and supported review of information related to comprehensibility as well as consideration of lessons learned in implementing existing systems in this area. We believe the GHS as adopted by the United Nations in December 2002 has great promise for improving protections of workers worldwide and look forward to participating in discussions in the United States regarding whether it should be adopted here.

JOHN L. HENSHAW

Assistant Secretary for Occupational Safety and Health

U.S. Department of Labor


As Elena Fagotto and Archon Fung note, clear disclosure, or even the threat of clear disclosure, does work to reduce risk. But too often the actual disclosure is presented in a foggy manner, with qualifiers and jargon using up all the headline space and the key point–safe, unsafe, or unknown–buried deep at the bottom of the page if it can be found at all.

Is there a cure? If so, it lies in the design of the statute rather than in the regulatory process, where for both regulators and regulated the momentum is toward stylized technical obscurity and away from common sense. One 15-year-old disclosure law, California’s Proposition 65, has been remarkably successful in avoiding the fog problem by requiring that the disclosure itself be in plain terms and making sure that underlying complexities are grappled with somewhere else than in the language the public sees. Its regulators have enormous room to help resolve scientific uncertainties but almost no room to muffle the message.

Incentives are crucial to making this pro-clarity approach work, of course. The carrot built into the California law is that if risk assessment uncertainties are resolved and the risk is below a defined threshold, then no disclosure is required at all. The stick is that certain kinds of uncertainty are frowned on, and some kinds of risks must be disclosed even if uncertainties remain. The result of this arrangement has been a flood of progress in resolving level-of-risk questions and far fewer actual disclosures than anyone in the regulated community had predicted (i.e., much more reduction of risk to below threshold levels).

Perhaps the simplest lesson from this experiment in fog-clearing is that fog itself can be the target of disclosure. Clearly disclosing the fact of fog can be a powerful force in dispersing it. If, during the multidecade debate over the risks of benzene in the workplace, federal law had required employers to simply tell their employees, “we and the government can’t tell you if this particular chemical is safe or not,” employers might have felt a stronger incentive to get answers about that chemical or stop using it.

As Fagotto and Fung point out, the point of disclosure is to stimulate risk-reducing action. Disclosing what you don’t know, in the right context, may be more of a stimulus than disclosing what you do.

DAVID ROE

Workers Rights Program

Lawyers Committee for Human Rights

Oakland, California


Improving hazard communication (Hazcom) in the workplace is a critical element in any occupational safety and health program. I commend Issues for publishing Elena Fagotto and Archon Fung’s article. It identifies current problems with Hazcom systems, especially problems with Material Safety Data Sheets (MSDSs), training, and the promotion of pollution prevention.

MSDSs. Not only can one MSDS vary greatly from another for the same substance, but MSDSs also frequently provide inaccurate information (an astounding 89 percent of the time, according to researchers cited in the article). Some of these inaccurate MSDSs can actually lead to injury or illness if workers follow their advice. MSDSs are a very important addition to the arsenal of better Hazcom, but, as the authors suggest, when they are inaccurate, incomplete, or difficult to understand, quality Hazcom is impossible.

Training. The authors also discuss Hazcom training, making the important points that “information does not necessarily increase understanding or change behavior” and that training is more than a pamphlet or a video. Too often, employers confuse information with training. I have some concerns with the apparent lauding of third-party training in the article, since third-party training, in and of itself, is not a guarantee of quality. For organized workers, I recommend training by trade unions. Trade unions have a history of effective training, specifically geared toward their members. Hands-on small-group activities led by specially educated peer instructors have shown time and again that if one designs curricula and uses instructors who relate well to trainees, training can be extremely successful as well as cost-efficient.

Pollution prevention. The authors also discuss a positive movement toward pollution prevention, with incentives to use less hazardous instead of more hazardous substances. Employers increasingly see such substitution as limiting liabilities and sometimes even lowering cost and improving productivity. The authors cite a General Accounting Office study that found one-third of employers switching “to less hazardous chemicals after receiving more detailed information from their suppliers.” Changing work practice and introducing engineering controls are also key elements for a prevention program. Fagotto and Fung report that employers need to go beyond substitution but do not specify what some of those other options might be.

A follow-up article might discuss institutionalized forces against the disclosure of hazard information, since one must change the economic and legal disincentives for full disclosure before being able to truly have good Hazcom. Continual evaluation of MSDS use, Hazcom training, and progress toward pollution prevention are also needed. Another useful follow-up to this article might focus on solutions to the Hazcom problems they have so well described. I hope Fagotto and Fung will continue working and publishing on this important problem.

RUTH RUTTENBERG

George Meany Center for Labor Studies

National Labor College

Silver Spring, Maryland


Improving hazard communication has been a continuous journey that began in the 1940s and is still progressing. Information on chemicals is more readily available in a more uniform format. This provides employers with appropriate information to better manage hazardous chemicals in the workplace and protect employees with controls, personal protective equipment, training, and waste disposal. There are also “greener” products. One-third of employers have substituted less hazardous components. These activities all benefit workers, communities, consumers, and regulators. The facts paint a positive picture of the Hazcom journey, but there is room for improvement. For example, improved worker understanding would translate into behavior changes. But perhaps it is time to recognize and commend the broad use of material safety data sheets and the benefits to all.

The Globally Harmonized System (GHS) is the next stop on the journey. It will harmonize existing hazard communication systems. The GHS will improve hazard communication by allowing information to be more easily compared and by utilizing symbols and standard phrasing to improve awareness and understanding, particularly among workers. By providing detailed and standardized physical and health hazard criteria, the GHS should lead to better quality information. By providing an infrastructure for the establishment of national chemical safety programs, the GHS will promote the sound management of chemicals globally, including in developing countries. Facilitation of international trade in chemicals could also be a GHS benefit.

The GHS is a step in the right direction. Its implementation will require the cooperation of countries, international organizations, and stakeholders from industry and labor.

MICHELE R. SULLIVAN

MRS Associates

Arlington, Virginia


Standardized testing

“Knowing What Students Know” by James Pellegrino (Issues, Winter 2002-03) has provided a sound and sensible structure for a new mapping of assessment to instruction and identification of competencies in young people. But we still need a cognitively based theoretical framework for the content of that knowledge and of the processes that lead to the acquisition and use of that structure. My goal in this brief response is to describe our efforts toward filling in these gaps.

Our work is based on a model of skills that posits three broad skills classes: People need creative skills to generate ideas of their own, analytical skills to evaluate whether the ideas are good ones, and practical skills to implement their ideas and persuade other people of their worth. This CAP (creative-analytical-practical) model can be applied in any subject matter area at any age level. For example, in writing a paper, a student needs to generate ideas for the paper, evaluate which of the ideas are good ones, and find a way of presenting them that is effective and persuasive.

We have done several projects using this model.

  1. We assessed roughly 200 high-school students for CAP skills, teaching them college-level psychology in a way that either generally matched or mismatched their patterns of abilities. We then assessed their achievement for analytical learning (for example, “compare and contrast Freud’s and Beck’s theories of depression”), creative learning (for example, “suggest your own theory of depression, extending what you have learned”), and practical learning (for example, “how might you use what you have learned to help a friend who is depressed?”).

Students were assessed for memory learning, as well as for CAP learning. Students taught at least some of the time in a way that matched their strengths outperformed those who were not.

  2. In another set of studies, we taught over 200 fourth-graders and almost 150 rising eighth-graders social studies or science. They were taught either with the CAP model, a critical thinking-based model, or a memory-based model. Assessment was of memory-based as well as CAP knowledge. We found that students learning via the CAP model outperformed other students, even on memory-based assessments.
  3. In a third study, we taught almost 900 middle-school students and more than 400 high-school students reading across the disciplines. Students in the study were generally poor readers. We found that on memory and CAP assessments, students taught with the CAP model outperformed students taught with a traditional model.
  4. In a fourth study of roughly 1,000 students, we found that the CAP model provided a test of skills that improved the prediction of college freshman grades significantly and substantially beyond what we attained from high-school grade-point averages and SAT scores (a rough sketch of this kind of incremental-prediction analysis follows the list).
  5. In a fifth ongoing study, we are finding that the CAP model improves teaching and assessment outcomes for advanced placement psychology. So far, we have not found the same to be true for statistics.

In sum, the CAP model may provide one of a number of alternative models to fill in the structure so admirably provided by Pellegrino.

ROBERT J. STERNBERG

PACE Center

Yale University

New Haven, Connecticut


Every law tells a story, and the one No Child Left Behind (NCLB) tells is mixed (“No Child Left Behind” by Susan Sclafani, Issues, Winter 2002-03). We hear that public schools are generally broken and solely culpable for the achievement gap. Yet they are also so potent that some help and big sticks can get each of them to bring 100 percent of their students to the “proficient” level in reading, math, and science–a feat never before accomplished, assuming decent standards, in even world-class education systems.

The story correctly values scientifically based practices. Yet its plot turns on subjecting schools to a formula for making “adequate yearly progress” toward 100 percent proficiency that suspends the laws of individual variability, not to mention statistical and educational credibility. Child poverty has no role, except to cast those who raise it as excuse-makers who harbor the soft bigotry of low expectations.

For disadvantaged children and the prospects of NCLB, the consequences of ignoring poverty’s hard impact are more dire. “In all OECD countries,” the United Nations Children’s Fund’s (UNICEF’s) Innocenti Research Center reports, “educational achievement remains strongly related to the occupations, education and economic status of the student’s parents, though the strength of that relationship varies from country to country.” NCLB is right to demand that our schools do more to mitigate the impact of inequality. America has the most unequally funded schools, the greatest income inequality, and the highest childhood poverty rate among these countries, often by a factor of 2 or 3.

Notwithstanding prevailing U.S. political opinion, UNICEF’s review of the international evidence does not locate the source of the achievement gap in the schools but outside them. It also finds that the educational impact of inequality begins at a very early age. Overcoming the substantial achievement gap that exists even in the school systems that are best at mitigating it, the review concludes, necessitates high-quality early childhood education.

The National Center for Education Statistics’ Early Childhood Longitudinal Study corroborates this view with domestic detail. By the onset of kindergarten, the cognitive scores of U.S. children in the highest socioeconomic status group are, on average, a staggering 60 percent above the scores of children in the lowest socioeconomic status group. Disadvantaged children can indeed learn, and they do so while in our maligned public schools at about the same rate as other children. But while advantaged children are spending the school year in well-resourced schools and reaping academic benefits outside of school, where most of a student’s time is spent, disadvantaged kids who are behind from the start are having the opposite experience. The achievement gap persists.

NCLB correctly seeks to greatly accelerate the progress of disadvantaged children, and some of its measures can help. But there is no evidence that it can get schools to produce a globally unprecedented rate of achievement growth without an assist from a high-quality early childhood system targeted to disadvantaged children and a more encompassing embrace of what research says will work in formal schooling. It is not just the soft bigotry of low expectations that leaves children behind; it is also the hard cynicism of making incredible demands while ignoring what it takes to fulfill them.

BELLA ROSENBERG

Assistant to the President

American Federation of Teachers

Washington, D.C.


Margaret Jorgensen’s thoughtful article on new testing demands raises important issues that we at the state level are confronting right now (“Can the Testing Industry Meet Growing Demand?” Issues, Winter 2003). New York state has a long history of producing its own exams and of releasing the exams to the public after they are administered.

Because our focus is standards-based, the discussion of depth versus breadth of content is one in which we are currently engaged. Actually, our approach has been at the cognitive level: We have been trying to determine what metacognitive skills in the later grades are needed to apply the enabling skills acquired in the earlier grades to the demands of jobs and postsecondary education.

New York state has instituted standard operating procedures for item and test evaluation, which rely on content expertise, statistical analysis, and subsequent intensive psychometric and cognitive reviews. These reviews are, in turn, based on industry standards and the state criteria that derived from the state learning standards.

Jorgensen has raised legitimate concerns about the nation’s capacity to implement the new testing program. In New York, where we have had extensive practical experience dealing with the type of problems Jorgensen discusses, we are confident that we will be able to conduct sound assessments that will inform instruction.

JAMES A. KADAMUS

Deputy Commissioner

New York State Education Department

Albany, New York


Recruiting the best

Although we agree with William Zumeta and Joyce S. Raveling’s characterization of the problems in “Attracting the Best and the Brightest” (Issues, Winter 2003), we differ in interpreting the causal chain and long-term implications of their proposed solutions. The policy story they present is incomplete.

Very bright students have many options. It might be a healthy outcome that increasing numbers of them consider a broad range of careers within business and health care fields in addition to those in the sciences and engineering. Perhaps the former are offering positive incentives that attract student interest.

Conversely, perhaps, the disincentives within science, technology, engineering, and mathematics fields drive able students to consider opportunities elsewhere. Among the disincentives we would cite is the increasing time to professional independence. The extended apprenticeship period curtails opportunities in areas such as childbearing and childrearing, earnings commensurate with years of preparation, and positions of greater responsibility and leadership.

Although growth in graduate degree production in the biomedical sciences has tracked federal R&D funding (just as the downturn in Ph.D. production reflects the loss of ground, for example, in funding for the physical sciences and engineering), biomedicine hardly presents a model of human resource development that is worthy of emulation. The creation of perpetual postdocs, the marginalization of young professionals, low pay and often a lack of benefits, and an undue focus on the academic environment as the predominant yardstick of a successful career do not add up to a positive model of human resource investment.

The lessons of growth in the biomedical sciences suggest that increased federal support may be a necessary condition, but it is unlikely to be a sufficient strategy for remedying the human resource issues that the authors identify. There is a need to expand the preparation and emphasis beyond research to include broader participation in the scientific enterprise, in different sectors, and through other professional roles.

There is a policy flaw in the authors’ thinking as well. Despite all of the problems recounted above, students are still drawn to the joy and possibilities of a life in science. They still enter our graduate programs. Whether they ultimately make contributions to knowledge and practice depends as much on the actions of our universities, as both producers and consumers of future scientists and engineers, as on the policies of the federal government.

In the absence of a human resource development policy, the federal agencies have followed a haphazard and uncoordinated approach to nurturing talent and potential. Strengthening the federal signal in the marketplace, which the authors advocate as a “demand-side approach,” is doomed if institutions and their faculties do not respond to the signal. They must produce science professionals who have not only the seeming birthright of “best and brightest” but also the knowledge and skills to be versatile science workers over a 40-plus-year career. We need to value them for contributing not only to the science and technology enterprise but also to the larger public good.

Without such a response, the federal signal, even if funding grows, will continue to distort perceptions of market opportunities. Human resource development for science and engineering will remain a supply-driven business detached from both national need and student aspiration. Smart students will always recognize when the environment has not changed.

SHIRLEY M. MALCOM

American Association for the Advancement of Science

Washington, D.C.

DARYL E. CHUBIN

National Action Council for Minorities in Engineering

New York, New York


William Zumeta and Joyce S. Raveling approach the issue of attracting the best and brightest to science and engineering (S&E) by viewing the shaping influences on individual choice as a marketplace in which supply and demand are the drivers. They argue that federal policy and R&D priorities have an important influence on both of these elements and therefore on student interest in S&E.

Zumeta and Raveling lay out several conditions that may contribute to the cooling of interest in S&E.

  1. Lengthy periods of training and research apprenticeship that are required before an individual can become an independent researcher.
  2. Modest levels of compensation for individuals whose prospects for higher earnings in other sectors are quite good.
  3. Lack of faculty positions and research positions in academe as well as elsewhere.
  4. Lack of sufficient research support in a number of S&E fields.

To this list I would add the idea that we need to invest in curricula that will interest and encourage students during their lower division experience and attract them to further S&E studies.

We are still largely speculating about what might encourage more students to undertake advanced study and enter the S&E workforce as researchers, as scholar/teachers, and as science, technology, and math teachers.

A 2001 national survey conducted by the National Science Foundation (NSF) and released recently reports that the annual production of S&E doctoral degrees conferred by U.S. universities has fallen to a level not seen since 1994. However, a modest upturn in 2000-01 may foreshadow greater production.

As Zumeta and Raveling demonstrate in their study, our best-performing college students are becoming less likely to pursue careers in S&E. Meanwhile, the federal workforce and our faculty and teacher corps are aging, and we are approaching a generational transition. We must deal with these problems now.

What are the federal R&D agencies doing about the drop in enrollment and completion in S&E fields? The issue of workforce education and development is a national priority. The National Science and Technology Council has established a subcommittee composed of representatives from several federal agencies to examine this issue. In the meantime, NSF, whose programs I know best, is working on all of the components that are likely to influence career choices, guided by the assumption that no single intervention will work if the larger education and workforce environment is not addressed and if the quality of the K-12 and undergraduate experiences is not enhanced.

We are increasing the stipends for graduate study in our Graduate Research Fellowships; our GK-12 program, which opens up opportunities for graduate students to work in the schools; and our IGERT program (Integrative Graduate Education and Research Traineeships). This increase recognizes the importance of financial support as well as of timely completion of a doctorate.

In addition, we are increasing our support for undergraduates so that they can gain research experience as early as possible in their studies. Our reviews of our Research Experiences for Undergraduates program suggest that early research experiences solidify the goals of students who have not yet committed to advanced study and confirm the ambitions of students who are already serious about going on to advanced degrees. To provide access to research experiences, we support alliances of institutions that can provide opportunities for their faculty and students to participate in research. This is especially important for institutions with significant enrollments of women and minorities.

We are working on expanding research opportunities in all of the fields supported by NSF and are seeking to influence the dynamics described by Zumeta and Raveling. We are making efforts to increase both the size and duration of our awards and are encouraging our investigators to develop meaningful educational strategies that will broaden participation in their research and increase its educational value for all students and for the general public.

Finally, as it prepares to implement the No Child Left Behind Act, the U.S. Department of Education is putting significant resources into the schools in order to ensure that all of our nation’s classrooms have qualified teachers who are knowledgeable and able to inspire and challenge their students. NSF is collaborating with the U.S. Department of Education to support Mathematics and Science Partnerships, which bring K-12 and higher education together with other partners to develop effective approaches to excellence in math and science education for all students. In addition, NSF is investing resources in research on learning in S&E at all levels and supporting the development of new approaches to the preparation and professional development of K-12 teachers.

JUDITH A. RAMALEY

Assistant Director for Education and Human Resources

National Science Foundation

Arlington, Virginia


Air travel safety

There has been and continues to be much concern expressed about the potential for aircraft system disruption caused by portable electronic devices (PEDs) used by passengers on airplanes. This is a difficult issue for a host of reasons, the first one being detection. Since we humans have no inherent way to sense most radio waves, we rely on detecting their effects. It may be the ring of a cell phone, the buzz of a pager, music emanating from a radio, a display on sophisticated measuring equipment, or abnormal behavior of an aircraft system.

A commercial airplane’s safe operation strongly depends on numerous systems that in turn rely on specific radio signals for communication, navigation, and surveillance functions. All electronic devices are prone to emit radio frequency (RF) energy. Consumer electronic devices are subject to RF emission standards that should provide assurance that the devices will not exceed limits prescribed by the Federal Communications Commission. An open question is whether those standards alone are sufficient. Also, determining which standards are applicable, ensuring consistent test and measurement methodologies, and having confidence that all of the thousands of devices produced are compliant and that they remain so once in the hands of consumers despite being dropped, opened, damaged, etc., are all concerns to be addressed. New technologies such as ultrawideband generate even more concerns, because the technology can intentionally transmit across bands previously reserved for aviation purposes.

At this point, no one knows for certain whether PEDs known to be compliant with applicable emission standards will or will not interfere with commercial transport aircraft systems. A greater level of understanding is emerging, thanks to studies and efforts that have been under way over the past several years. However, more study is needed. The sensitivities of potentially affected aircraft systems should be quantified, the consequences of multiple PEDs in operation on a given aircraft must be determined, and the different characteristics of the multitude of aircraft/system configuration combinations must be assessed.

Not being able to readily detect the presence and strength of unwanted RF signals makes establishing cause and effect very difficult. As stated in “Everyday Threats to Aircraft Safety” by Bill Strauss and M. Granger Morgan (Issues, Winter 2002-03), there have been numerous suspected cases of interference with aircraft systems, but few repeatable, fully documented interference events from PEDs have come to light. The fact that devices being carried aboard aircraft are so portable makes matters even worse. Geometrical relationships and separation distances are critical in the interaction of a PED with an aircraft system. Because of these factors, duplicating the exact set of variables to demonstrate an interference event is quite difficult.
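
As a rough illustration of why geometry matters, the sketch below uses a simplified free-space path-loss model; it ignores cabin reflections, fuselage shielding, antenna patterns, and cable coupling, and the frequency and separation distances shown are hypothetical examples rather than measured values.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss in dB; ignores reflections, shielding, and antenna gains."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Hypothetical example near the GPS L1 frequency (1575.42 MHz):
for d in (0.5, 2.0, 10.0):  # separation distances in meters
    print(f"{d:4.1f} m separation: {free_space_path_loss_db(d, 1575.42e6):5.1f} dB of path loss")
```

Even in this idealized model, each doubling of the separation distance buys only about 6 dB of attenuation, which is one reason the exact location of a device in the cabin can determine whether an interference event is reproducible.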

Data available to date suggest that there is potential for interference under certain conditions. The consequences could have safety implications. This invokes the need for policy, and establishing a policy creates the need for its implementation. Implementation, especially enforcement, by airline personnel can be problematic. Among all their other duties to ensure safe and comfortable travel, flight attendants must know and understand the policy, somehow distinguish the “good” from the “bad” devices, know when they can and can’t be operated, know when multifunction devices are operating in an acceptable versus unacceptable mode (such as PDAs with embedded cell phones), confront passengers who are not complying (which, by the way, does not make for good customer relations at a time when the airlines must do everything possible to keep customers coming back), and somehow rapidly inform the pilots, who are behind the equivalent of a bank vault door, when a passenger refuses to abide by the rules.

Onboard RF detection systems could potentially assist with policy enforcement, but much development work must be done to determine what the real threat levels and frequencies are. For example, some cell phones may pose no real safety threat under many operating conditions. The sensitivity of the airplane systems, the dynamic ranges of intentional signals (i.e., worst-case and best-case intentional signal strengths), the attenuation between a PED and an affected aircraft system, and the criticality of an aircraft system during a particular phase of operation are just a few of the many additional considerations needing study and quantification before an onboard detection system can be made reliable and practical.

Because of research programs by organizations such as NASA, Carnegie Mellon, the University of Oklahoma, the U.S. Department of Transportation, and the Federal Aviation Administration, the level of understanding is improving. Yet a lot of ground remains to be covered. The best thing to do in the interim is to adjust airline PED policy as factual and scientific data are developed and to maintain such policy in a clear, consistent, and enforceable form to ensure passenger and crew safety. A slow and sure approach is always best when dealing with potential effects up to and including the loss of human life. Passengers clearly would like to be able to use PEDs during all phases of flight so that they can work, communicate beyond the airplane, and use their flight time most effectively. Eventually, aircraft will be designed, built, and/or modified not only with the requirement to offer safe passage in mind, but also with this new passenger demand to use any PED, anywhere, at any time.

KENT HORTON

General Manager-Avionics Engineering

Delta Air Lines

Atlanta, Georgia


Correction:

In the review of Flames in Our Forests: Disaster or Renewal (Issues, Fall 2002, p. 87), the name of one of the authors–Stephen F. Arno–was misspelled.

A Change of Climate

Although the signs of global warming are becoming ever more prominent, casual observers of the media in the United States or Europe might easily conclude that U.S. citizens are in denial about climate change, refusing to take responsibility for controlling their emissions of carbon dioxide (CO2) and the other greenhouse gases (GHGs) that cause global warming. Although it is true that the federal government remains stalemated on how to deal with climate change, the notion that no climate action is taking place in this country is erroneous. The most intriguing story is what has been happening in state legislatures, at city council meetings, and in corporate boardrooms, as well as on college campuses, in community groups, and in a range of other local settings. Across the nation, numerous climate action programs are moving aggressively to reduce emissions of GHGs.

It is rare that a week goes by without the announcement of a new initiative. Among recent clippings, New York Governor George Pataki, a Republican, announced that his state aims to get 25 percent of its electricity from carbon-free renewable energy resources within a decade. Ford and General Motors declared their intent to follow Toyota’s lead and manufacture hybrid electric cars and trucks that are more fuel-efficient and less polluting. New Hampshire adopted emissions controls for three aging power plants. American Electric Power, the largest single source of GHGs in the western world, launched an effort to reduce its emissions by 4 percent by 2006. Students at Zach Elementary School in Ft. Collins, Colorado, chose to purchase wind energy instead of coal power, thus keeping 420,000 pounds of CO2, the leading GHG, out of the atmosphere. How many millions of tons of CO2 have been saved by the activities of states, cities, corporations, and citizens has not yet been calculated, but the number is growing rapidly.

What is the significance of this nascent grassroots movement? In the past, major shifts in societal values have originated at the local level. Popular movements to abolish slavery, allow women to vote, extend civil rights to African Americans, and curb secondhand smoke started small and then spread nationally. The nation now seems to be witnessing a similar snowball effect, where one successful climate action program inspires two or three more. These early efforts are demonstrating that climate protection is possible, affordable, and increasingly viewed as desirable by many political, corporate, and civic leaders. Widespread activities to reduce emissions of GHGs demonstrate that despite the partisan wrangling in Washington, ordinary citizens can begin addressing climate change now. The challenge will be for federal “leaders” to catch up.

Temperatures rising

Although Swedish scientist Svante Arrhenius first suggested in 1896 that CO2 emitted from the burning of fossil fuel would lead to global warming, the issue did not receive sustained political attention until the 1980s. In 1992, the United Nations Framework Convention on Climate Change set a goal of stabilizing atmospheric concentrations of GHGs at a level that would prevent dangerous interference with the climate system. In 1997, the world’s nations gathered in Kyoto, Japan, to negotiate how to accomplish this goal. The resulting agreement–the Kyoto Protocol–has now been signed by 100 nations and, if ratified by Russia, will go into effect later in 2003.

The protocol, which would require the United States to reduce its GHG emissions to a level that is 7 percent below 1990 levels, met a frosty reception in Washington. One senator pronounced it “dead on arrival.” During his presidential campaign, George W. Bush pledged to reduce CO2 emissions, but shortly after taking office reneged on this pledge. All rhetoric aside, it will be nearly impossible to stabilize global CO2 concentrations without the full and active cooperation of the United States. U.S. citizens are 4 percent of the world’s people but produce 25 percent of all GHGs. U.S. emissions are larger than the combined emissions of 150 less developed countries. Texas alone produces more CO2 than the combined emissions of 100 countries, and the utility American Electric Power produces more than Turkey.

Several developments are driving the ground swell in climate action programs. For one thing, scientific understanding of climate change has advanced significantly. In 1992, the National Academy of Sciences cautiously concluded, “Increases in atmospheric GHG concentrations probably will be followed by increases in average atmospheric temperatures.” By 2001, the academy was much more definitive: “Greenhouse gases are accumulating in Earth’s atmosphere as a result of human activities, causing surface air temperatures to rise. Temperatures are, in fact, rising . . . There is general agreement that the observed warming is real and particularly strong within the past 20 years.”

Reports issued by the Intergovernmental Panel on Climate Change, an interdisciplinary group of more than 2,000 scientists, show a similar evolution. In 1990, the panel stated that the “unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.” In 1995, it said that “the balance of evidence suggests a discernible human influence on global climate.” In 2001, the panel concluded that “there is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.”

Another factor fueling the growth of climate action programs is that climate change is becoming evident, even to lay people. New England gardeners notice that spring arrives about two weeks earlier than it used to, Inuit hunters confirm the rapid melting of Arctic sea ice, and rangers in Glacier National Park document rapidly vanishing ice fields. According to the National Oceanic and Atmospheric Administration, the 10 warmest years in the historical record have occurred since 1980; 1998 was the warmest year and 2002 was the second warmest. It has now been 17 years since the world has experienced a cooler-than-normal month. This sort of unending heat wave has not gone unnoticed.

The human role in climate change is no longer a controversial theory to be debated on talk radio; increasingly, the public views it as a fact. And surveys show that people are concerned. For example, a recent poll revealed that 75 percent of registered voters (including 65 percent of Republican voters) believe that doing nothing about global warming is “irresponsible and shortsighted.” The business community’s perception of climate change also has changed. In the face of the accumulating body of scientific evidence, denying the problem is no longer a credible corporate strategy. Many powerful corporations that once lobbied against climate action have performed an about-face. Ford, for example, recently ran an ad that read: “Global Warming. There, we said it.” In public policy as in corporate affairs, once a problem is acknowledged, the discussion turns to possible solutions. It is against this evolving scientific and political backdrop that politicians, corporate executives, and citizens are beginning to act.

States lead the charge

Many states are large emitters of GHGs. For example, 30 states emit more CO2 than Denmark, 10 states emit more than the Netherlands, and Texas and California together emit more than all the nations of Africa combined. Efforts by states to reduce their emissions thus have global ramifications. And states have important regulatory power over many activities that are relevant to the issue of GHGs.

Some of the most significant activity has occurred in California. In 2001, the legislature passed an $800 million energy conservation bill aimed at reducing the state’s electricity use by 10 percent. Although primarily intended to address the state’s electricity crisis, the law also will lead to strong reductions in GHG emissions. In 2002, the legislature took aim at motor vehicles, which account for 40 percent of the state’s CO2 emissions, directing the California Air Resources Board to develop a plan for the “maximum feasible reduction” in CO2 emissions. Since burning a gallon of gasoline produces 20 pounds of CO2, the obvious way to reduce emissions is to improve fuel efficiency. Today, a typical car produces nearly 12,000 pounds of CO2 each year–roughly one pound per mile driven. Sport utility vehicles and light trucks pollute more. Noting that federal fuel efficiency standards have barely budged in two decades, California’s governor, Gray Davis, said, “I would prefer to have Washington take the lead, but in the absence of that we have no choice but to do our part.” The auto industry has objected that the proposed changes, to take effect in 2009, cannot be accomplished and would not be acceptable to consumers. (Automakers raised similar objections to previous fuel economy targets, emissions limits, seatbelts, and other advances). However, because 10 percent of cars sold in the United States are purchased in California, the state’s law (if it survives legal challenges) may become a de facto national standard, since automakers are unlikely to build a separate line of cars solely for that market.
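
The per-vehicle arithmetic behind these figures is simple enough to lay out explicitly. In the sketch below, the 20-pounds-per-gallon figure comes from the article, while the annual mileage and fuel economy are illustrative assumptions, not reported values.

```python
# Back-of-the-envelope check of the per-car CO2 figures cited above.
CO2_LBS_PER_GALLON = 20   # cited in the article
ANNUAL_MILES = 12_000     # assumed typical annual mileage
MPG = 20                  # assumed average fuel economy

gallons_per_year = ANNUAL_MILES / MPG                      # 600 gallons
co2_lbs_per_year = gallons_per_year * CO2_LBS_PER_GALLON   # 12,000 pounds
co2_lbs_per_mile = co2_lbs_per_year / ANNUAL_MILES         # about 1 pound per mile

print(f"{co2_lbs_per_year:,.0f} lbs of CO2 per year, {co2_lbs_per_mile:.1f} lb per mile")
```

Under those assumptions the result matches the article’s figures, and it also shows why fuel economy is the lever: the same mileage at twice the miles per gallon would halve the emissions.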

Electric utilities produce 38 percent of the nation’s GHGs and are an obvious target for reductions. New Hampshire passed a precedent-setting bill that requires Public Service Company of New Hampshire, the state’s largest utility, to reduce CO2 emissions to 1990 levels by 2007. The bill was supported by a bipartisan coalition that included environmental groups and the utility itself. “We knew that there would be new legislation,” said company spokesperson Martin Murray, “and we also knew that if we were involved in developing it, it would be more likely to emerge in a form we could support; collaboration achieves better results than fighting.”

Oregon and Massachusetts also have passed laws requiring cuts in CO2 emissions from power plants. A 1997 Oregon law required new power plants to emit 17 percent less CO2 than existing ones. Developers can offset plant emissions by contributing to energy conservation efforts, developing renewable energy projects, planting trees, or using the plant’s waste heat in nearby buildings. Generators that violate the standard are allowed to purchase credits from those who reduce emissions more than required. In Massachusetts, the six power plants in the state that produce the most CO2 are now required to reduce their emissions by 10 percent by 2006-2008. Plants that fail to meet the deadline must purchase emissions credits.

New Jersey has committed to reduce GHG emissions by 2005 to a level that is 3.5 percent below 1990 levels. Under the state’s comprehensive plan, one-third of the reductions will come from efficiency improvements in buildings, one-third from greater use of clean energy technologies, and one-third from improvements in transportation efficiency, waste management, and resource conservation. New York is providing $25 million in tax credits to building owners and tenants who increase energy efficiency. Maryland is waiving its sales tax on efficient refrigerators, room air conditioners, and clothes washers. In Oregon, appliances that are 25 percent more efficient than federal standards qualify for a tax credit.

States also are addressing climate change by promoting carbon-free renewable energy sources, such as wind and solar power. In 1999, George W. Bush, then governor of Texas, signed a bill requiring the state’s electricity providers to develop 2,000 megawatts of renewable capacity by 2009–and this goal has already been achieved. Under this renewable portfolio standard (RPS), energy providers can develop the capacity themselves or purchase credits from solar, wind, hydro, biomass, and landfill gas projects. A surge in wind power development was spurred by the synergistic effect of the RPS and a federal tax credit for wind energy production. (The federal credit is set to expire at the end of 2003, unless extended by Congress.) Maine, California, Wisconsin, Arizona, Minnesota, Iowa, Connecticut, Nevada, New Jersey, New Mexico, Pennsylvania, and Massachusetts also have adopted RPSs. These programs will collectively produce enough carbon-free electricity to power 7.5 million homes, according to calculations by the Union of Concerned Scientists. This is the equivalent of taking 5.3 million cars off the road or planting 1.6 billion trees. The annual CO2 savings equal about one-half of 1 percent of the nation’s total emissions.

Some states are enhancing their impact by banding together to address climate change. Six New England governors joined premiers of eastern Canadian provinces in pledging to lower, by the year 2020, greenhouse emissions to a level that is 10 percent below 1990 levels. The pact calls for reducing electricity emissions by using more clean-burning natural gas, increasing renewable energy sources, and promoting energy efficiency. Signed in 2001 by three Republican governors, two Democrats, and an Independent, the pact demonstrates strong bipartisan support for curbing global warming. “This agreement sends a powerful message to the rest of the nation about the importance of working cooperatively to cut pollution,” said Jeanne Shaheen, then the governor of New Hampshire. “If we’re going to be successful, it means not just working on it in New Hampshire.”

The attorneys general of seven states (New York, Massachusetts, Maine, New Jersey, Rhode Island, Washington, and Connecticut) recently notified the U.S. Environmental Protection Agency of their intent to sue the agency for failing to regulate CO2 emissions under the federal Clean Air Act. The attorneys general of 11 states wrote to urge President Bush to cap power plant CO2 emissions and increase automobile fuel efficiency. The chief legal officers of Massachusetts, Alaska, Maine, New Hampshire, Rhode Island, Vermont, California, New York, Connecticut, New Jersey, and Maryland wrote, “Far from proposing solutions to the climate change problem, the administration has been adopting energy policies that would actually increase greenhouse gas emissions.” The authors urged the president to “adopt a comprehensive policy that would protect both our citizens and our economy.”

This coin has another side, however. A number of states, including Wyoming, West Virginia, Pennsylvania, North Dakota, Colorado, and Alabama, have passed resolutions barring state action to reduce GHG emissions or urging Congress to reject the Kyoto Protocol, or both. It is probably no coincidence that these states are among the nation’s largest coal producers. In states where coal provides the bulk of the electricity, a family’s $100 electric bill represents the mining of 1,400 pounds of coal, whose burning creates nearly 3,000 pounds of CO2, most of which will still be in the atmosphere a century from now. But in a sign of the times, some of these same states are now developing climate action plans.
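
The coal figures quoted above can be roughly cross-checked with basic combustion chemistry. In the sketch below, the 1,400-pound coal figure comes from the article, while the carbon fraction of coal is an assumed typical value.

```python
# Rough consistency check of the coal-to-CO2 figures cited above.
COAL_LBS = 1_400              # coal mined per $100 electric bill, as cited
CARBON_FRACTION = 0.60        # assumed typical carbon content of coal
CO2_TO_CARBON = 44.0 / 12.0   # mass ratio of CO2 to carbon (molecular weights)

co2_lbs = COAL_LBS * CARBON_FRACTION * CO2_TO_CARBON
print(f"~{co2_lbs:,.0f} lbs of CO2")   # roughly 3,000 pounds, consistent with the article
```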

Cities at work

More than 100 cities already have pledged to cut their GHG emissions. For example, the San Francisco Board of Supervisors in early 2002 unanimously passed Mayor Willie Brown’s bold resolution to cut the city’s emissions over the next 10 years to a level that is 20 percent below 1990 levels (a 13 percent greater reduction than would have been required under the Kyoto Protocol). “When Washington isn’t providing leadership, it’s critical for local governments to step in,” Brown said, adding that the goal “is as much about protecting our national security as it is about protecting our quality of life.”

Since city governments own buildings, operate motor vehicle fleets, and regulate such things as utility rates, energy codes, mass transit, highway construction, outdoor lighting codes, waste management, land use, and other activities that have large climate effects, there are many policies they can adopt to reduce GHG emissions. A brief sampling of measures that have been incorporated into climate action plans includes the integration of transportation and land use policies in Portland, Oregon; altering the commuting behavior of municipal employees in Los Angeles; and purchasing hybrid electric vehicles for municipal fleets in Denver. Aspen, Colorado, now levies the world’s highest carbon tax on profligate energy use in high-end homes, raising $1.9 million that has been used to install solar hot water systems, buy wind power, fund rebates for energy-efficient appliances, and retrofit public buildings.

The International Council for Local Environmental Initiatives offers guidance to cities through its Cities for Climate Protection campaign, in which municipalities commit to inventory their GHG emissions, set a target for future reductions, develop a local action plan, and verify its results. More than 500 cities worldwide (including 125 U.S. cities), representing 8 percent of global GHG emissions, are participating in the program. Cities have found dozens of ways to reduce or offset emissions, including tree planting, mass transit, renewable energy, lighting retrofits, mechanical upgrades of public buildings, installing light-emitting-diode bulbs in stoplights, stronger energy codes for new buildings, carpooling, and bike lanes.

Complementing public actions, individuals and private organizations are getting into the CO2 reduction act. Students at the University of Colorado increased their student fees to purchase the entire output of a large wind turbine, thus saving 2,000 tons of CO2. In Pennsylvania, 25 colleges are purchasing wind power. A religious group called Episcopal Power and Light is recruiting churches on the East Coast and in the San Francisco Bay area to buy wind energy. Families have an important role to play. The typical U.S. household produces more than 43,000 pounds of CO2 per year, or 120 pounds per day. Half of these emissions come from heating, cooling, and operating the family home, while half come from driving cars. Not only are many families cutting back on use of fossil fuels, they are taking other steps as well. The federal Office of Energy Efficiency and Renewable Energy estimates that nationwide about 400,000 households are buying carbon-free electricity from their utility companies. In Colorado, 26,000 families and hundreds of businesses are participating in a “green pricing” program that has helped fund two $30-million wind farms. This program, which has counterparts in many states, keeps 180,000 tons of CO2 out of the air each year. By spending $5 per month on wind power, a Colorado family can save 4,800 pounds of CO2 each year–an 11 percent reduction in its climate impact for less than 20 cents per day. Driving a more efficient car, weatherizing their home, and installing compact fluorescent lights in place of incandescents can double these savings.
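
The household arithmetic in the Colorado example can be verified directly from the figures quoted above; the sketch below introduces no numbers beyond those in the text.

```python
# Working through the household figures cited above.
HOUSEHOLD_CO2_LBS_PER_YEAR = 43_000   # typical U.S. household, as cited
WIND_PREMIUM_PER_MONTH = 5.00         # dollars spent on wind power, as cited
CO2_SAVED_LBS_PER_YEAR = 4_800        # savings from that purchase, as cited

reduction = CO2_SAVED_LBS_PER_YEAR / HOUSEHOLD_CO2_LBS_PER_YEAR   # about 0.11
cost_per_day = WIND_PREMIUM_PER_MONTH * 12 / 365                  # about $0.16

print(f"{reduction:.0%} reduction in climate impact for about {cost_per_day * 100:.0f} cents per day")
```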

Corporate clout

A growing number of prominent corporations, including automakers, oil companies, and electric utilities, have voluntarily committed to reducing their GHG emissions. By their public pronouncements, these corporations seem to have concluded that climate change can no longer be ignored and that responsible companies must engage the problem. Among Fortune 500 companies, there is an increasing belief that it is only a matter of time before GHGs are regulated, so beginning now to reduce emissions and factor climate change into long-range planning is a smart strategy. Some corporations have concluded that climate action presents an attractive business opportunity. For others, including electric utilities, the uncertainties of future climate policy cast a huge shadow over investment decisions, including whether to build new coal plants or retrofit aging ones. This uncertainty, of not knowing what federal regulators may ultimately require, has begun to seem more financially hazardous than resolving the matter would be.

The notion that cutting CO2 emissions will devastate the U.S. economy is not borne out by experience. As manufacturers evaluate their energy use, they are discovering that many reductions are profitable and thus enhance their competitive position. For example, IBM reduced its total energy use by almost 7 percent in 2001, saving $22.6 million and 220,100 tons of CO2 emissions. Corporations also are discovering that they can increase productivity while simultaneously reducing emissions, further challenging the belief that economic growth and CO2 reductions are incompatible. DuPont has reduced its GHG emissions to 63 percent below 1990 levels (primarily by reducing nitrous oxide emissions and other byproducts of fluorocarbon manufacture) and has held energy consumption flat since 1990, despite a 36 percent increase in company output. The company views its climate change activities as a way to prepare “for the market place of 20 to 50 years from now–which will demand less emissions and a markedly smaller ‘environmental footprint’ from human activity.”

Among other corporate actions, Alcoa has pledged to reduce GHG emissions by the year 2010 to a level that is 25 percent less than 1990 levels. Dow has committed to reduce energy use per pound of product by 20 percent. In 1997, BP was the first major oil company to declare that action to reduce climate change was justified. The company, which supplies approximately 3 percent of the world’s oil, pledged a 10 percent reduction in its own emissions (not those produced by the fuels it sells), and reached that goal in 2002, eight years ahead of target. By using less fuel to produce its products and by burning off (“flaring”) less natural gas at oil wells, the company saved an estimated $650 million. According to the company’s chief executive, John Browne: “People expect successful companies to take on challenges, to apply skills and technology and to give them better choices. Well, we are ready to do our part–to reinvent the energy business, to stabilize our emissions–and, in doing so, to make a contribution to the challenge facing the world.” BP is betting that, in the long term, its solar subsidiary will profit from exponential growth in photovoltaics, a market that is doubling every three years. The idea that climate change represents a new business opportunity also is taking hold among automakers. The commercial success of hybrid electric cars from Toyota and Honda has pushed Ford, Daimler-Chrysler, and General Motors to announce that this fuel-saving option will soon be available in their vehicles.

Nongovernmental organizations are helping corporations address the climate challenge. The Pew Center on Global Climate Change (with 38 companies on its Business Environmental Leadership Council) and Environmental Defense’s Partnership for Climate Action help corporations identify cost-effective strategies for reducing their GHG emissions. Participating companies share lessons they have learned in order to piggyback on each other’s success. Most companies begin by reducing their lighting loads and upgrading their factories’ heating, cooling, and pumping equipment. Some of the resulting savings are then often spent to buy clean power, further reducing emissions. Prominent companies buying wind energy include Kinko’s, Lowe’s Home Warehouse, Advanced Micro Devices, Patagonia, and Toyota.

The road ahead

A skeptic might fairly point out that CO2 emissions in the United States are still rising, and that by 2010 emissions are likely to be about 25 percent higher than they were in 1990. Two important reasons for this rise are immigration and lifestyle choices. The nation has added more than 30 million people and 25 million motor vehicles since 1990, roughly equivalent to grafting on another California. At the same time, consumers are using 10 percent more energy per capita than two decades ago as people drive more and choose larger homes and automobiles. A typical U.S. citizen now produces about a million pounds of CO2 in his or her lifetime.

Against this picture, is it really possible to forge at the grassroots level a climate action plan that will be sufficient to the challenge? Probably not. To achieve the goal of stabilizing GHG concentrations in the atmosphere, emissions will need to eventually fall to nearly zero. It is difficult to see how this can occur without federal action. In this light, the news is mixed. Most of the federal government seems at loggerheads over issues related to global warming, and the Bush administration remains firm in its opposition to the Kyoto Protocol. However, a number of federal agencies are quietly conducting voluntary programs to reduce CO2 emissions. In addition, Sen. John McCain (R-Ariz.) and Sen. Joseph Lieberman (D-Conn.) recently introduced a bill to cap CO2 emissions and launch a market for economy-wide trading in them. This type of system has been successful in reducing sulfur dioxide emissions. The cap would be adjusted over time as needed to achieve climate goals, and large polluters would be required to purchase emission allowances in a CO2 marketplace. It also has been suggested that the federal government should place a tax on CO2 emissions. With either a cap-and-trade system or a tax, putting CO2 into the atmosphere would no longer be free, something economists say is critical to addressing the climate challenge in an economically efficient manner.
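
To make the trading mechanism concrete, the toy sketch below walks through a two-firm allowance market. The firms, tonnages, and permit price are entirely hypothetical and are meant only to illustrate how a cap puts a price on each ton of CO2, not to represent the actual design of the McCain-Lieberman proposal.

```python
# Toy illustration of a cap-and-trade market; all firms and numbers are hypothetical.
CAP_TONS = 150                                        # hypothetical economy-wide cap
allowances = {"UtilityA": 100, "ManufacturerB": 50}   # hypothetical initial allocation
emissions = {"UtilityA": 120, "ManufacturerB": 30}    # hypothetical actual emissions
PERMIT_PRICE = 10.0                                   # hypothetical market price per ton

for firm, allowed in allowances.items():
    shortfall = emissions[firm] - allowed
    if shortfall > 0:
        print(f"{firm} must buy {shortfall} permits (cost ${shortfall * PERMIT_PRICE:,.0f})")
    else:
        print(f"{firm} can sell {-shortfall} surplus permits (revenue ${-shortfall * PERMIT_PRICE:,.0f})")

print(f"Total emissions: {sum(emissions.values())} tons against a cap of {CAP_TONS} tons")
```

The point is simply that under either a cap-and-trade system or a tax, each additional ton emitted carries an explicit cost; under trading, each ton avoided also becomes an asset that can be sold.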

States may play an important catalytic role in promoting national action. “If several large states, such as California, New York, and Pennsylvania, were all to pass similar legislation, it might be possible to actually begin to develop a national carbon emissions trading regime before any formal action is taken at the federal level,” according to Granger Morgan of Carnegie Mellon University. This is roughly how the trading of nitrogen oxides among the states of the Northeast developed. But unlike with nitrogen oxides, states would not have to be located next to each other for CO2 trading to make sense, because CO2 mixes globally, and a ton saved anywhere has value anywhere else.

Given the clear need for a national solution, are states and cities in danger of overreaching as they begin to regulate emissions? Again, probably not. Congress recently rejected proposals to adopt a national RPS and to set stricter federal automotive fuel efficiency standards. Therefore, states are doing the right thing to push the debate on these issues. In addition, these programs provide a laboratory for learning what approaches work best, so that as the programs expand, eventually to the national level, there will be a variety of lessons to draw on in structuring the most workable and cost-effective strategy.

People working to reduce emissions around the country recognize that state efforts are no panacea, and they would eagerly applaud a more active federal role. As the group of attorneys general wrote to President Bush in 2002: “State-by-state action is not our preferred option . . . It may increase the uncertainty facing the business community, thus potentially making the most cost-effective solutions more difficult.” They also pointed to a recent Department of Energy report that concluded that the United States “could address carbon dioxide emissions issues with minimal disruption of energy supply and at modest cost, but only with fully integrated planning. Such integrated planning would be best promoted by the regulatory certainty that would result from comprehensive regulatory action at the national level.” Such statements illustrate that by failing to provide leadership, the federal government is instigating a proliferation of varying state standards, on everything from cars to utility regulation, that will be more difficult for businesses and more expensive for consumers.

An economy-wide cap-and-trade system or CO2 tax would result in wide-ranging market-driven changes that would supplant the need for the many other federal emissions reduction programs. But short of such a comprehensive strategy, there is still a great deal that Washington could and should be doing. The Bush administration currently favors a voluntary approach that recognizes corporations that offer to meet certain reduction goals. But the scale of the climate challenge ordains that a voluntary approach will not suffice. To gradually but thoroughly reengineer the nation’s energy systems to be free of CO2 emissions, new energy technologies will be required. R&D is urgently needed for advanced vehicles, less expensive and more efficient photovoltaic cells, advanced biofuels, a hydrogen infrastructure, methods to capture and sequester carbon dioxide, and other vital technologies.

Providing a great deal more federal funding for the development of tomorrow’s clean energy technologies is thus crucial. Maintaining or expanding federal support for today’s renewable energy resources, such as the production tax credit for wind power, is also imperative. Federal economic encouragement will have synergistic effects with state programs as well, in getting new technologies into the marketplace and increasing their volume enough for economies of scale to drive down their costs. In addition, setting more aggressive federal efficiency standards for energy-consuming equipment from air conditioners to automobiles would help (as opposed to the recent rollback of air conditioner standards and the minuscule suggested increase in fuel economy for automobiles). And it is essential that Washington reengage in the evolving international response to climate change.

To say that Washington should do more does not mean that the surging tide of subfederal activities is not moving the political debate. These activities are demonstrating that there is a political appetite for carbon reductions, that such reductions are often profitable (though as reductions proceed, their cost is expected to rise but still be affordable), and that many climate initiatives have numerous economic and environmental benefits. The Bush administration rejects mandatory GHG reductions on the grounds that they would harm the nation’s economy, yet many states taking climate action are doing so partly because it benefits their economies and leads to greater energy independence. Improving energy efficiency that improves the bottom line and developing renewable energy sources that reduce costs, pollution, and dependence on foreign oil are just the kinds of steps that the federal government could be taking to address both economic and security concerns at the national level.

Thus, many U.S. citizens are, indeed, taking responsibility for climate change–and are demonstrating in countless ways their willingness to invest in solutions. Although the scale of the challenge is daunting, eliminating a billion tons of CO2 begins with the first ton. Each of the activities at the grassroots level reduces emissions, provides lessons about how to reduce them further, and perhaps most important, brings pressure to bear on the federal government to initiate the comprehensive strategy that is urgently needed. How long will it take for Washington to feel the heat?

From the Hill – Spring 2003

Nondefense R&D would take a hit in proposed FY 2004 budget

On February 3, President Bush proposed a 4.2 percent increase in federal R&D for fiscal year (FY) 2004. However, most of this increase would be devoted to defense development and homeland security research. Nondefense R&D budgets would receive modest increases, flat funding, or cuts. Indeed, if the modest increase in the budget for the National Institutes of Health (NIH) is excluded, total nondefense R&D spending would decline by 0.1 percent. If only basic and applied research are included in the calculation, non-NIH nondefense R&D would decline by 3.4 percent.

Most of the 4.4 percent increase (to $122.5 billion) in total federal R&D would go to the development of weapons systems. Total research (basic and applied) would rise by 1.5 percent to $53.7 billion. Nondefense R&D including NIH would increase by 1.2 percent to $55 billion.

NIH and the National Science Foundation (NSF) may both have to adjust to lower rates of funding increases after years of favored treatment. After five years in which its budget grew by almost 15 percent a year, resulting in a doubling of the budget, the NIH budget would rise by only 2.7 percent in FY 2004 to $27.9 billion. However, NIH research would increase by 7 percent, because money from facilities would be shifted to research. And although President Bush signed an NSF authorization bill in December 2002 that would double its budget over five years, the proposed $5.5-billion NSF budget would fall far short of the $6.4 billion that has been authorized. The proposed 3.2 percent NSF increase and the 10 percent increase approved for FY 2003 obviously differ sharply from the nearly 15 percent increases for both years that were envisioned in the authorization.

The newly created Department of Homeland Security (DHS) would become a major R&D funding source in FY 2004 with a budget of $1 billion.

Congress is getting a late start on the FY 2004 budget because it didn’t complete most of its FY 2003 appropriations work until February 13, nearly four months into the new fiscal year. Eleven unfinished appropriations bills were wrapped into one omnibus bill that was signed into law by President Bush on February 20.

Federal R&D spending fared well in the final FY 2003 tally. In contrast to the cuts proposed for most agencies by the president, all of the major R&D funding agencies will receive at least modest increases. Total FY 2003 federal spending on R&D will increase 13.8 percent, and basic and applied research will rise 9.7 percent. Nondefense, non-NIH funding, however, continues to stagnate.

Here are department and agency breakdowns of the president’s budget proposals for FY 2004:

  • DOD would see its R&D budget grow to $62.8 billion, up $4.2 billion or 7.1 percent, with all of the increase going to the development of weapons systems. The new increases would come in the wake of record increases of $8.8 billion and $6.7 billion the previous two years. The big winner would be missile defense, a high Bush administration priority. Missile defense development would jump 22 percent to $8.3 billion. Funding for other big development projects would also climb, including a $4.4-billion request for the Joint Strike Fighter (up 28 percent). By contrast, basic research would fall 7.7 percent to $1.3 billion and applied research would decline 14.4 percent to $3.7 billion. The Defense Advanced Research Projects Agency (DARPA) would see its R&D funding increase to $3 billion, up 9.8 percent.
  • Most ($801 million) of the DHS R&D budget would be overseen by the new Directorate of Science and Technology, which includes the new Homeland Security Advanced Research Projects Agency. Some of the directorate’s funding has been transferred from other agencies or departments. The rest of DHS’s R&D portfolio consists of programs in the Transportation Security Administration and the Coast Guard, which were transferred from the Department of Transportation.
  • The big winner within NIH would once again be the National Institute of Allergy and Infectious Diseases, which would receive a boost of 17 percent to $4.3 billion as NIH’s lead institute for its $1.6-billion bioterrorism R&D portfolio. Most NIH institutes would receive increases between 3 and 5 percent. Buildings and facilities funding would fall to $80 million from $629 million in FY 2003, when one-time funding was approved for construction that included biodefense research laboratories.
  • Excluding its non-R&D education activities, NSF R&D would total $4 billion, a 2.8 percent boost. Some of its research directorates would see declining or flat funding. The biological sciences directorate would see its budget fall 1.6 percent to $562 million, while the geosciences budget would inch up 0.5 percent to $688 million. The physical sciences would be emphasized, with the directorate of mathematical and physical sciences receiving a 2.6 percent boost to $1.1 billion. The major research equipment and facilities construction account would increase from $149 million to $202 million, the major beneficiaries of which would be the Atacama Large Millimeter Array, the IceCube Neutrino Observatory, and EarthScope.
  • The National Aeronautics and Space Administration’s (NASA) proposed $15.5-billion budget was finalized before the Columbia shuttle disaster and may now be revamped. NASA’s R&D (two-thirds of the agency’s budget) would grow just 0.2 percent to $11 billion, because non-R&D programs, of which the largest is the Space Shuttle, would have a higher priority. Space science R&D would rise 12.7 percent to $4 billion, including a 27 percent boost for solar system exploration. These funds would be used for developing new nuclear propulsion systems and missions to Mercury, the asteroids, a comet, Pluto, and the Kuiper Belt. Biological sciences research would increase from $312 million to $359 million. The International Space Station would receive $1.7 billion, down from $1.8 billion, but the construction schedule of the station is now in doubt. All shuttle flights in 2003 except for that of Columbia were planned for station construction.
  • Department of Energy (DOE) R&D funding would increase 4 percent to $8.5 billion, but the entire increase would go to DOE’s defense activities. On the nondefense side, funding for the Office of Science would remain essentially flat at $3.3 billion for the fourth year in a row, affecting programs in high-energy physics, nuclear physics, fusion research, and advanced computing. There would be a large boost in nanoscale science, offset by a planned drop in the construction costs of the Spallation Neutron Source. Overall energy R&D funding would remain flat, but there would be significant shifts based on administration priorities. The recently announced FreedomCAR and Freedom Fuel programs to develop next-generation efficient automobiles would receive $1.5 billion over five years, including $272 million in FY 2004, of which $68 million would be new funding. As a result, hydrogen R&D and work on fuel cells and vehicle technologies would increase. Coal R&D (including new spending on carbon sequestration research) and nuclear energy R&D would also increase, balanced by cuts in noncoal fossil fuels R&D and energy conservation R&D. DOE’s defense R&D programs would jump 8.6 percent to $4.2 billion, including substantial increases in inertial confinement fusion and advanced scientific computing as well as DOE’s core stockpile stewardship program.
  • R&D in the Department of Agriculture would fall by 10.3 percent to $1.9 billion. The FY 2003 total was higher, at $2.1 billion, because it included hundreds of congressionally designated projects that would now be eliminated. The National Research Initiative of competitively awarded grants would receive $200 million, up from $166 million. The mostly earmarked Special Research Grants would be held to $23 million, down from $112 million. Intramural research would decline 5.6 percent to $1 billion because of the deletion of earmarks.
  • The budget again proposes to eliminate two programs in the Department of Commerce’s National Institute of Standards and Technology (NIST): the Advanced Technology Program (ATP), funded at $179 million in FY 2003, and the Manufacturing Extension Partnership, funded at $106 million in FY 2003. Intramural R&D in NIST laboratories would increase 7.3 percent to $330 million. Overall, the NIST R&D budget would fall 22.1 percent to $410 million. R&D in the National Oceanic and Atmospheric Administration would decline 1.4 percent to $675 million.
  • R&D in the Department of the Interior would rise 1 percent to $633 million, but there would be a 4.2 percent cut to $545 million for its lead science agency, the U.S. Geological Survey.
  • The Environmental Protection Agency (EPA) R&D budget would fall 5.7 percent to $607 million. (The FY 2003 total is inflated with one-time building decontamination research funding in response to the anthrax attacks of fall 2001.) There would be flat funding or small cuts for most R&D programs for the second year in a row. The total EPA budget would decline to $7.6 billion, down from $8.1 billion in FY 2003.
  • Department of Transportation R&D funding would fall 1.2 percent to $693 million. But DOT is somewhat cushioned from tight budgetary times because most of its programs are funded from transportation trust funds.

R&D in the FY 2004 Budget by Agency
(budget authority in millions of dollars)

Columns: FY 2002 Actual | FY 2003 Estimate¹ | FY 2004 Budget | Change FY 03-04 (Amount) | Change FY 03-04 (Percent)
Total R&D (Conduct and Facilities)
Defense (military) 49,877 58,646 62,821 4,175 7.10%
S&T (6.1-6.3 + medical) 10,337 11,232 10,297 -935 -8.30%
All Other DOD R&D 39,539 47,415 52,524 5,109 10.80%
Health and Human Services 24,016 27,550 28,203 653 2.40%
Nat’l Institutes of Health 22,714 26,245 26,946 700 2.70%
NASA 10,224 10,999 11,025 26 0.20%
Energy 8,078 8,205 8,535 330 4.00%
NNSA and other defense 3,761 3,849 4,180 330 8.60%
Office of Science 3,074 3,075 3,066 -9 -0.30%
Energy programs 1,244 1,281 1,289 8 0.60%
Nat’l Science Foundation 3,525 3,927 4,035 109 2.80%
Agriculture 2,112 2,166 1,943 -223 -10.30%
Commerce 1,227 1,248 1,100 -148 -11.90%
NOAA 677 684 675 -9 -1.40%
NIST 503 527 410 -117 -22.10%
Interior 623 627 633 6 1.00%
Transportation 778 702 693 -9 -1.20%
Environ. Protection Agency 592 643 607 -37 -5.70%
Veterans Affairs 756 800 822 22 2.80%
Education 265 315 275 -40 -12.80%
Homeland Security * 266 669 1,001 332 49.60%
All Other 760 798 792 -6 -0.70%

Total R&D 103,100 117,297 122,485 5,189 4.40%
Defense R&D 53,731 62,986 67,515 4,530 7.20%
Nondefense R&D 49,368 54,311 54,970 659 1.20%
Nondefense R&D excluding NIH 26,654 28,066 28,024 -42 -0.10%
Basic Research 23,848 26,048 26,861 813 3.10%
Applied Research 24,407 26,878 26,870 -8 0.00%
Development 49,412 58,599 64,284 5,684 9.70%
R&D Facilities and Equipment 5,432 5,772 4,471 -1,301 -22.50%

Source: AAAS, based on OMB data for R&D for FY 2004, agency budget justifications, and information from agency budget offices.

*DHS figures for all years adjusted to include programs to be transferred to DHS from other agencies.

¹FY 2003 figures revised to reflect AAAS estimates of final FY 2003 appropriations.
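
As a reading aid, the change columns in the table are simply the difference between the FY 2003 estimate and the FY 2004 request, in dollars and as a percentage of FY 2003. A minimal sketch using a few rows from the table (small discrepancies, such as NIH’s 700 versus a computed 701, reflect rounding in the source figures):

    # Selected rows from the table above (budget authority in millions of dollars).
    rows = {
        "Defense (military)":         (58_646, 62_821),
        "Nat'l Institutes of Health": (26_245, 26_946),
        "Nat'l Science Foundation":   (3_927, 4_035),
        "Homeland Security":          (669, 1_001),
    }
    for agency, (fy03, fy04) in rows.items():
        change = fy04 - fy03                # amount column
        percent = 100 * change / fy03       # percent column
        print(f"{agency:28s} {change:+7,d}  {percent:+5.1f}%")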

Congress debates president’s hydrogen initiative

President Bush’s proposal to fund R&D to advance the development of hydrogen fuel cells was met with a mixture of excitement and skepticism at a recent hearing of the House Science Committee. The president made the proposal during his State of the Union address on January 28.

The proposal is designed to complement last year’s FreedomCAR initiative by supporting the development of the infrastructure needed to produce, store, and distribute hydrogen as a possible replacement for gasoline in cars and trucks. The president has requested $1.5 billion over five years for the new initiative and for FreedomCAR, including $272 million in R&D funding for fiscal 2004, $68 million of which represents new money.

“In this century,” the president said, “the greatest environmental progress will come about not through endless lawsuits or command-and-control regulations, but through technology and innovation.”

Hydrogen fuel cells generate electricity by combining hydrogen and oxygen gases in such a way that water is the only waste product. A hydrogen-powered car would thus be very clean and produce no smog-causing pollutants or greenhouse gases. The use of hydrogen could help reduce the nation’s oil consumption and dependence on foreign oil.
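
For reference, the overall reaction the paragraph describes is simply hydrogen and oxygen combining to yield water, with the chemical energy released as electricity and heat:

\[ 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} \]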

A shift to a so-called hydrogen economy would not, however, automatically reduce total U.S. energy use or greenhouse emissions, because the production of hydrogen fuel would most likely require the use of fossil fuels such as natural gas. However, because hydrogen can be produced from many different sources, fuel cell vehicles could rely at least to some extent on environmentally friendly technologies such as wind or solar power and carbon sequestration.

Currently, hydrogen is widely used in the chemical and oil industries, where it is produced primarily from natural gas. Demonstration vehicles that run on hydrogen fuel cells have also been built. General Motors has announced that it will put several fuel cell vehicles in use in the Washington, D.C., area in May 2003, and Shell has said that by October 2003 it will equip one of its Washington-area gas stations with a hydrogen pump to support these vehicles. GM has said that it expects to sell a hydrogen-powered car by 2010.

Technology is also available for the storage of hydrogen either as a compressed gas or as a very cold (-253° Celsius) liquid, and high-pressure hydrogen pipelines are in use as well. However, the challenges of implementing these infrastructure technologies on the scale necessary for wide use in automobiles are daunting, as some members of the Science Committee pointed out at the March 5 hearing.

“The magic of the marketplace alone is not going to create a hydrogen economy, at least not anytime soon,” said Rep. Sherwood L. Boehlert (R-N.Y.), the Science Committee chairman. “In addition to the huge technical hurdles, switching to hydrogen may entail enormous costs….Hydrogen will be a cost-competitive fuel in the coming decades only if one takes into account the social costs of current fuels, such as the pollution they generate and the dependence on foreign oil they promote.”

Moreover, committee members questioned the administration’s decision to request cuts in other energy R&D programs. “Much of next year’s proposed [hydrogen initiative] funding comes from cutting other renewable energy R&D programs,” Boehlert said. “That’s not acceptable.”

But David K. Garman, the Department of Energy’s assistant secretary for energy efficiency and renewable energy, disputed this charge, describing the department’s other programs as “reasonably flat.”

Larry Burns, General Motors’ vice president for R&D and planning, said that focusing on hydrogen would not sacrifice other efforts, such as higher-efficiency hybrid and gasoline-powered vehicles. He said that GM is sponsoring near-term fuel-efficiency initiatives, as well as its long-term hydrogen initiative. Hybrids, he said, are an important “intermediate” technology.

Although many environmentalists welcome the advent of fuel cell technology and the potential to produce hydrogen using renewable energy sources, others see the Bush proposal as an attempt to reduce pressure on the automotive industry to embrace the use of hybrid electric technology. Hybrid cars are already on the market and offer substantial fuel-efficiency improvements. Although the administration anticipates that automakers will produce cost-competitive hydrogen-powered cars by 2010, critics argue that it will take much longer.

The Science Committee plans to address these issues as part of a comprehensive energy reform package that the House will likely take up in April.

NASA programs face intense scrutiny after Columbia loss

The loss of the space shuttle Columbia on February 1, 2003, has once again raised many questions about the operations and programs of the National Aeronautics and Space Administration (NASA). As investigations into what caused the shuttle disaster continue, Congress is examining anew the role of manned and unmanned space flight, the costs and benefits of continuing the shuttle program, the future of the International Space Station, and the effectiveness of NASA management.

At a joint February 12 hearing of the Senate Commerce, Science, and Transportation and the House Science Committees, NASA administrator Sean O’Keefe testified that by 10:30 a.m. the day of the accident–just one and a half hours after the loss of communication with the shuttle–NASA had already taken steps to activate the Columbia Accident Investigation Board, which will be headed by retired U.S. Navy Admiral Hal Gehman.

Although congressional leaders have commended NASA for the speed and openness with which the agency has shared information with Congress, the media, and the public, concern has been raised about the board’s independence. In a February 6 letter to President Bush, Democratic members of the House Science Committee requested that the charter be changed to require that the board report directly to the White House and to Congress, rather than to the NASA administrator. At the hearing, Rep. Bart Gordon (D-Tenn.), the ranking member of the House Science Committee’s Space and Aeronautics Subcommittee, sharply rebuked O’Keefe, stating that the so-called independence of the board “did not pass the smell test.” Other members of the two committees said that the investigative board would be greatly enhanced with the addition of more scientists and engineers to the existing mix of military and civilian government employees.

In response to the criticism, O’Keefe subsequently added two members to the board: Sheila Widnall, an MIT engineering professor and former secretary of the Air Force, and Roger Tetrault, retired chief executive officer of McDermott International, Inc., an energy services firm.

Because the addition of these new members still did not quell the criticism, three additional appointments have been made: Sally Ride, a former astronaut and physics professor at the University of California at San Diego; John Logsdon, a space policy professor at George Washington University; and Douglas Osheroff, a Stanford University professor and winner of the 1996 Nobel Prize in physics. The appointments, especially that of Osheroff, responded to criticism that the investigative team lacked the scientific caliber of Richard Feynman, the Nobel Prize-winning theoretical physicist whose presence proved so valuable in the investigation of the Challenger disaster.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Case against New Nuclear Weapons

Does the United States need nuclear bombs to destroy enemy bunkers and chemical or biological weapons? For some people, the answer is clear. Strong proponents of nuclear weapons speak of the need to give the president every possible military option, and the Bush administration’s 2002 Nuclear Posture Review reflects this affirmative response. On the other side, committed opponents maintain that no potential military capability could justify designing–let alone building or using–new nuclear bombs. For both camps, the details of the proposed weapons are irrelevant.

Yet neither of the simple arguments for or against new nuclear weapons is broadly accepted. The United States does not develop every possible weapon simply to provide the president with all options; policymakers have, for example, judged the military value of chemical weapons insufficient to outweigh the political benefits of forgoing them. On the other hand, the nation has never rejected nuclear use outright and has always reserved the possibility of using tactical nuclear weapons. Indeed, until the end of the Cold War, such weapons were central to U.S. military thinking.

Despite their disagreements, the people engaged in debate over new nuclear weapons have tacitly agreed on one thing: that these weapons would deliver substantial military benefits. Thus, they have cast the dilemma over new nuclear weapons as one of military necessity versus diplomatic restraint. But this is a false tension: New nuclear weapons would, in fact, produce few important military advances. Yet their development would severely undercut U.S. authority in its fight against proliferation.

Advocates of new tactical nuclear weapons have tended to focus shortsightedly on simple destructive power. In particular, most arguments for bunker-busting nuclear weapons ignore the difficulty of locating threatening bunkers in the first place. During the Gulf War of 1991, military planners painstakingly assessed the potential consequences of bombing Iraqi chemical weapons facilities, debating nuclear and nonnuclear weapons, as well as the option of leaving the bunkers alone. Ultimately, the military used conventional weapons to bomb every known facility. Subsequently, however, international weapons inspectors, aided by Iraqi defectors, discovered that those targets had been the mere tip of a vast Iraqi system for producing and storing weapons of mass destruction. Had the military used nuclear weapons to bomb all known chemical facilities during the Gulf War, the United States would have made barely a dent in Iraq’s deadly capability while incurring massive political backlash as people died from the accompanying nuclear fallout.

The challenge of finding hidden targets is the norm, not an exception. In Afghanistan, U.S. efforts to eliminate the Taliban and Al Qaeda were hindered by the difficulty of tracking down their underground hideouts. Intelligence technology, which relied heavily on detecting mechanical equipment, power lines, and communications systems to identify hidden facilities, foundered in the face of a backward enemy that employed none of the technologies being searched for. Osama bin Laden is still alive not because the United States lacked powerful weaponry, but because U.S. intelligence could not find him in the caves of Tora Bora.

Still, an inability to locate all enemy weapons stockpiles and underground leadership targets is not an argument for leaving alone those that can be found. But proponents of nuclear weapons have overstated the capability of the nuclear option even in cases where targets can be located, while underestimating nonnuclear potential. In particular, proponents have contended that nuclear weapons are needed to compensate for difficulties in precisely locating underground targets; that they are needed to neutralize chemical and biological agents and thus prevent their deadly use; and that only with nuclear weapons will there be no “safe havens” (no depth below which enemies are safe). However, each of these arguments can be debunked, as illustrated in the following examples.

Inadequate intelligence

Libya has been suspected of producing chemical weapons at its Tarhunah complex, located 60 kilometers southeast of the capital city of Tripoli and hidden in tunnels and bunkers under roughly 20 meters of earth. The problem is that U.S. analysts have not been able to produce an exact blueprint of the underground chambers. This lack of precision leads some observers to argue that although the facility is, in theory, shallow enough to be destroyed with conventional arms, uncertainty concerning its location may require the large destructive radius of a nuclear weapon to compensate.

A nuclear weapon detonated at or near the surface produces a large crater and sends a massive shock wave into the ground. Underground facilities within this crater are destroyed, as are facilities slightly outside the zone by strong stresses that rupture the earth. Based on the intelligence community’s knowledge (even given its uncertainty) about the Tarhunah facility, it is apparent that a five-kiloton ground-penetrating nuclear weapon could destroy it. This attack would produce a moderate amount of nuclear fallout, the precise nature of which would depend on whether the weapon was detonated inside the facility or in the surrounding earth. To be conservative, military planners would have to assume the latter. Such a blast would kill every human being within approximately 15 square kilometers, according to calculations by Robert Nelson of Princeton University. Although this zone would not reach Tripoli, concerns about fallout would require medical monitoring for civilians as far as 20 kilometers downwind from the facility. U.S. troops in the zone would have to halt operations or risk being exposed to fallout. Troops could not enter the immediate facility area to inspect damage or collect intelligence, even with protective gear, which is ineffective against nuclear fallout.
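
To put the 15-square-kilometer figure in linear terms, treat the lethal zone as a circle (a rough simplification of Nelson’s calculation):

\[ r = \sqrt{\frac{A}{\pi}} = \sqrt{\frac{15\ \mathrm{km^2}}{\pi}} \approx 2.2\ \mathrm{km}, \]

that is, lethal effects would extend more than two kilometers from the detonation point.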

Alternatively, there are a number of nonnuclear approaches that are already available or could be developed for destroying or neutralizing this type of complex. If the main bunker could be more precisely located, then a single earth-penetrating conventional bomb could reach it. A missile the length of the current GBU-28 penetrator, modified to strike the surface at twice the GBU-28’s current impact speed, could smash through the cover of earth and reinforced concrete and destroy the facility with conventional explosives. This suggests that the military should focus on improving intelligence capabilities, particularly the ability to precisely map underground targets that have already been located, rather than on devising ever more powerful weapons.
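
One simple physical observation behind the doubled impact speed (an illustration, not a penetration model): kinetic energy scales with the square of velocity,

\[ E_k = \tfrac{1}{2}\,m v^{2}, \qquad E_k(2v) = 4\,E_k(v), \]

so a penetrator striking at twice the GBU-28’s impact speed arrives with four times the kinetic energy available to defeat earth and reinforced concrete.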

Even if the facility cannot be precisely localized, several conventional penetrator missiles used simultaneously could mimic the effect of a small nuclear weapon. One scenario would be to mount multiple sorties to cover the entire suspected facility area. In a more sophisticated approach, the military is now developing a “small-diameter bomb” that packs several penetrating missiles into the payload of a single aircraft–essentially, an underground version of the ubiquitous cluster bomb. Extending the small-diameter-bomb concept to missiles the length of the GBU-28 would enable simultaneous delivery of as many as 24 penetrating missiles, at least several of which would be expected to penetrate the facility.
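
A minimal sketch of the statistics behind “at least several of which would be expected to penetrate the facility,” assuming each penetrator independently has the same chance of striking the buried chamber; the per-penetrator probability used here is purely illustrative, not a figure from the article:

    # Illustrative only: n penetrators, each with an assumed independent hit probability p.
    n = 24       # penetrators deliverable in one strike, per the text
    p = 0.15     # hypothetical per-penetrator probability of striking the facility
    expected_hits = n * p
    p_at_least_one = 1 - (1 - p) ** n
    print(f"expected hits: {expected_hits:.1f}")                 # about 3.6
    print(f"chance of at least one hit: {p_at_least_one:.1%}")   # about 98%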

Still other options are available. If the facility were operating, then conventional electromagnetic pulse weapons–recently added to the U.S. arsenal–might be applied to destroy or disable equipment inside. Because an electromagnetic pulse can easily travel down a bunker’s power and ventilation ducts, equipment inside would be vulnerable to attack. Such weapons could be delivered by cruise missile.

Advocates of new tactical nuclear weapons have tended to focus shortsightedly on simple destructive power.

In an indirect approach to rendering the facility useless, cruise missiles could be used to temporarily block its entrances. It also would be possible to establish a “no-personnel zone” or “no-vehicle zone” around the facility. A range of intelligence assets, such as spy satellites, would be trained on the area surrounding the complex, and any attempt to move material into or out of the facility would be stopped. Although the facility itself might continue to produce weapons, those weapons could not be removed and used on the battlefield. These approaches would be limited by the need to continually devote assets to a single facility or to mount repeated attacks; if there were many simultaneous targets of concern, the method might not prove feasible.

In each case of applying conventional weapons, collateral damage due to chemical dispersal would be minimal outside the facility. Inside, chemical agents would be dispersed, but U.S. troops inspecting the area could mitigate the dangers from these by wearing protective gear.

Agent defeat

Proponents of nuclear weapons for attacking stockpiles of chemical and biological agents, called “agent defeat weapons,” typically argue that the biological or chemical fallout produced by a conventional explosive attack can be more deadly than the fallout produced by a nuclear weapon. This argument misses two crucial points: In many cases, nonnuclear agent defeat payloads can avoid spreading chemical and biological fallout; and the fallout from a nuclear attack, though perhaps smaller than the potential biological or chemical fallout, is still prohibitive.

Consider a hypothetical example from Iraq, which is suspected of retaining stockpiles of weaponized anthrax and is known to use hardened bunkers extensively. A typical bunker might be 20 meters in height and cover an area measuring 400 square meters, have walls that are five meters thick and a roof of reinforced concrete, and be buried under five meters of earth. Built during the absence of United Nations weapons inspections, the bunker’s existence has become known to U.S. intelligence through satellite imagery captured during its construction. It is believed to contain several tons of anthrax in storage barrels, though in the absence of a continuing ground presence, this cannot be confirmed.

A 20-ton penetrating nuclear weapon (if it were developed) detonated at the floor of the facility would incinerate its contents, preventing the dispersal of anthrax. But it would also spread nuclear fallout. Deaths from acute radiation poisoning would be expected as far as one kilometer downwind. People nearer than four kilometers downwind would, if not evacuated quickly, receive a radiation dose greater than that received by a nuclear worker during an entire year.

Nonnuclear payloads might, however, cause less collateral damage while avoiding political problems. A penetrating bomb carrying a fragmenting warhead and incendiary materials could be used. The warhead would break the anthrax out of any exposed containers, and the heat from the incendiary materials would neutralize the anthrax. Heavily shielded containers might not break open, but in that case the anthrax, although not destroyed, would not be released either. The bunker would remain intact.

Alternatively, a penetrating bomb carrying submunitions and neutralizing chemicals could be used. The submunitions would spread throughout the bunker and release the anthrax from its containers, even if it were stored behind barriers, and the neutralizing chemicals would render the anthrax inert. The bunker would probably remain intact, although it could be breached if it had been poorly constructed.

U.S. planners may not want to directly attack the bunker. Instead, a watch could be placed on the facility using satellite imagery coupled with armed unmanned aerial vehicles. Anyone or anything attempting to enter or leave the bunker would be destroyed, making the anthrax inside unusable.

Deep burial

Among proponents of new nuclear weapons, the most consistent error is the assumption that they would be silver bullets, leaving no underground facilities invulnerable to their effects. But such is not the case. Even the two-megaton B-83 bomb, the highest-yield weapon in the U.S. arsenal, would leave unscathed any facilities buried under more than 200 meters of hard rock. In contrast, functional defeat approaches–sealing off entrances rather than directly destroying the bunker–have no depth limitations.

To better understand this, consider North Korea’s Kumchangri underground complex, which was once suspected of housing illicit nuclear weapons activities. The depth of the facility, built into the side of a mountain, is not publicly known, but its main chamber may quite possibly be deeper than 200 meters, putting it out of the range of even megaton-sized, earth-penetrating nuclear weapons. Even if the facility were only 150 meters underground, a one-megaton penetrating nuclear weapon would be required to destroy it, and the resulting nuclear fallout would have enormous consequences. If the wind were blowing southwest, then the North Korean capital of Pyongyang, 80 miles away, would have to be evacuated within hours of detonation to prevent the death of more than 50 percent of its residents from radiation poisoning. If the wind were blowing north or northwest, then residents of several large cities in China would have to be evacuated immediately. And if the wind were blowing south, then residents of several large cities in South Korea, as well as U.S. troops stationed in the DMZ, would have to be evacuated within hours to avoid numerous radiation deaths.
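
A sketch of the depth arithmetic, assuming the conventional cube-root scaling of ground-shock dimensions with yield and calibrating it to the figures above (roughly one megaton to destroy a facility 150 meters down). Both the exponent and the calibration point are assumptions for illustration, not results from the article:

    # Illustrative cube-root scaling of destruction depth with yield.
    def destruction_depth_m(yield_kt, ref_yield_kt=1_000, ref_depth_m=150):
        """Approximate depth (meters) destroyed by a penetrating weapon of the given
        yield (kilotons), scaled from an assumed 1 Mt / 150 m reference point."""
        return ref_depth_m * (yield_kt / ref_yield_kt) ** (1 / 3)

    for y_kt in (5, 300, 1_000, 2_000):
        print(f"{y_kt:5d} kt -> roughly {destruction_depth_m(y_kt):.0f} m")
    # About 26 m at 5 kt, 100 m at 300 kt, 150 m at 1 Mt, and 190 m at 2 Mt --
    # consistent with the article's point that even the largest U.S. weapon
    # leaves facilities under more than about 200 m of hard rock untouched.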

Alternatively, regardless of the facility’s depth, military planners could seek to disable rather than destroy the facility. Cruise missiles could be used to collapse entrances to the bunker. Entrances, however, might be reopened quickly, requiring repeated sorties to keep the facility closed. Thermobaric weapons, which debuted in Afghanistan, could be used to send high-pressure shock waves down the tunnels, possibly destroying equipment inside the facility.

An “information umbrella” approach also might be applied. The United States, possibly together with allies, would declare that no North Korean vehicles would be allowed to come near the facility. This curfew would be monitored using surveillance assets, and any vehicle attempting to enter or leave the facility would be destroyed.

Misguided federal efforts

Despite the limitations of nuclear capabilities, some policymakers are marching ahead. For the past year, Congress has been focused on a new weapon system called the robust nuclear earth penetrator (RNEP), a modification of either the B-61 or B-83 bomb that would have improved earth-penetration ability. The Department of Energy (DOE), in its 2003 budget request, asked for funding to begin a three-year, $45 million “feasibility and engineering” project that would include “paper studies” of the RNEP and might possibly “proceed beyond the mere paper stage and include a combination of component and subassembly test and simulation,” according to John Gordon, then the administrator of the DOE’s National Nuclear Security Administration.

This effort would be misguided. Misunderstanding of weapons technology and engineering has consistently marked congressional debate over the RNEP, and any further discussion first requires setting the record straight. Some observers have incorrectly characterized the RNEP as a low-yield “mini-nuke,” implying that it is more usable than other nuclear weapons. But as Rep. Curt Weldon (R-Pa.) pointed out correctly during House debate, the RNEP is not a mini-nuke–indeed, it is a very large, clumsy weapon.

Yet confronted with the observation that nuclear weapons are militarily useless, many people embrace this as a virtue, arguing that these weapons are for deterrence, not for warfighting. Their claim, however, is based on dubious deterrence arguments left over from the Cold War. At the core of this claim is the contention that deterrence works only when the United States threatens what the enemy values most; that many enemies are so foreign that it is impossible to reliably judge what they value; and that the United States therefore must be able to robustly threaten every asset of theirs.

This argument is not compelling. Consider, for example, the recent debate over the value of deterrence in confronting Iraq, in which analysts and politicians split into two camps. One faction claimed that Saddam Hussein is fundamentally undeterrable, and thus the United States must disarm him. The other argued that Saddam is rational and deterrable, and thus the United States should not attack. Neither camp argued that it is impossible to deter Saddam only because the United States currently has no earth-penetrating nuclear weapons that place his underground bunkers at risk–and that Iraq therefore should be attacked.

Some proponents of the RNEP have sought to dodge detailed debate over its utility, arguing that without the proposed feasibility study it is impossible to determine whether the weapon would be useful. But to understand what a feasibility study can and cannot accomplish, consider one of the leading RNEP proposals: modifying the two-megaton B-83 bomb to add ground-penetrating capability. Using basic physics, it is possible to estimate the weapon’s potential penetration depth and, in turn, place upper and lower bounds on fallout hazards and destructive capability. Indeed, many people, including myself, have made such estimates–and doing so certainly does not require a multiyear, multimillion-dollar feasibility study.

The proposed study would build on these basic calculations, looking more carefully at engineering limitations and narrowing the basic estimates. For example, laboratory scientists might conclude that although it may be scientifically possible to build a missile capable of achieving a 900-meters-per-second earth impact, it would be impossible to engineer the missile to withstand such an impact shock. This failure would reduce the estimate of earth penetration, decreasing the projected destructive potential while increasing the expected fallout hazard.

Congress must more seriously consider the advantages of nonnuclear options.

Given preliminary scientific estimates, military and political decisionmakers–before initiating the research project–should be able to come to one of three conclusions regarding the RNEP:

First, either its maximum destructive capability is too small, or the minimum fallout hazard too large, to make further development worthwhile. In this case, the proposed study is unnecessary.

Second, its minimum destructive capability is large enough, and the maximum fallout hazard small enough, to warrant development. In this case, the argument for a feasibility study as a preliminary step is a ruse; the more honest position would be to immediately endorse full development and deployment of the new weapon.

Third, depending on where within the preliminary estimates the destructive capability and fallout hazards fall, the weapon may or may not be useful. In this case, a feasibility study would be essential to refining the estimates and making a decision on proceeding.

The third possibility, however, is unlikely. The fallout hazard that such a bomb would produce is enormous. It could quickly kill people hundreds of kilometers downwind, and would contaminate cities even further away. There are only two reasonable conclusions from this exercise: Either one can decide that this is excessive collateral damage and oppose the RNEP, or one can decide that no collateral damage is excessive for this mission and support the RNEP. To claim a need for further engineering study of the robust nuclear earth penetrator is disingenuous.

Broader discussion needed

Though many people now maintain that the military has little interest in tactical nuclear weapons, policymakers continue to contemplate developing and deploying them. This will, unfortunately, remain the natural state unless political decisionmakers force a change. Although designers of nuclear weapons have a built-in imperative to seek nuclear solutions to military problems, there is little to be gained by the uniformed military from pushing back. It falls to Congress to actively solicit the advice of military thinkers on the utility or lack thereof of new tactical nuclear weapons.

To date, only the Senate Committee on Foreign Relations has devoted substantial hearing time to tactical nuclear weapons. But these weapons have not only political but also military liabilities. To explore these issues, the House and Senate Armed Services Committees should convene hearings on the robust nuclear earth penetrator and on tactical nuclear weapons more broadly. The committees should solicit input from retired military officers and from individuals who have spent time understanding both the nuclear and nonnuclear options. Only by making direct comparisons will policymakers be able to find agreement on a way forward.

Time to Sign the Mine Ban Treaty

Throughout the twentieth century, U.S. foreign policy was pulled in contrary directions by the muscle-flexing appeal of realism and the utopian promise of liberalism. At the end of the cold war, many pundits suggested that liberalism, having won the ideological battle against Marxism, was now poised to shape and inform the policy realm. But as the debate over banning antipersonnel landmines (APLs) makes clear, the tension between realism and liberalism is alive and well.

The United States has refused to sign the international Mine Ban Treaty (MBT) because, it claims, APLs are vital to the security of U.S. soldiers deployed along the 38th parallel in Korea. At the same time, the United States played a key and perhaps decisive role in initiating the mine ban movement in the early 1990s, and it is the single largest supporter of international mine-clearing efforts in terms of funding, training, and technological support.

It is not unusual for the United States to express its commitment to liberalism by espousing an ideal (such as a mine-free world, diplomatic resolution of conflict, sustainable development, or an end to racial discrimination), and then, because of concerns about ceding relative power, demonstrate its equal commitment to realism by refusing to take the concrete steps other countries believe necessary for achieving the ideal. In the case of the mine ban, closing the gap between ideals and policies will be more difficult than is commonly thought, because although the Korea rationale for declining to sign the MBT is diminishing in importance, a new rationale for using APLs has emerged: low-intensity conflicts in remote areas in which special forces are involved. It appears that the United States will, unfortunately, continue to espouse the ideal of a mine-free world while refusing to take the most promising steps toward achieving it.

Although its position has some merit, the United States is wasting yet another opportunity (as it has done recently on the issues of climate change, racism, and biological diversity) to solidify the norms and institutions it has championed since World War II and on which its privileged position in world affairs depends. By not supporting initiatives such as the MBT, the United States is also putting unnecessary stress on multilateral structures such as NATO and the United Nations. This is a mistake at a time when the U.S. security agenda, centered on terrorism, Iraq, and North Korea, desperately needs the multilateral support the United States has cultivated for five decades.

Five hundred years ago, Niccolo Machiavelli, the father of realism, argued that in the long run, political position depends more on good laws than on good arms, although the latter always have a role. From Machiavelli’s perspective, finding the proper balance between law and force is the key to maintaining political status and security. Today, placing some constraints on the use of force and adopting a higher level of commitment to international law as a solution to pressing global problems would best serve the long-term interests of the United States. In this regard, the costs of signing the MBT are small, whereas the benefits are great.

The human costs

In the mid-1990s, the problem of APLs entered the global agenda with a speed and urgency rarely seen in international politics. Bolstered in part by the publicity brought to the issue by high-profile figures such as Princess Diana and Queen Noor, its sudden visibility had its roots in the efforts of a network of nongovernmental organizations (NGOs) that included Human Rights Watch, Doctors Without Borders, and the Vietnam Veterans of America Foundation.

Mine ban advocates argued that these victim-activated devices often remained in the ground long after the wars in which they were deployed had ceased, ready to kill or maim noncombatants. Their long lives, ease of use, low cost, and high impact and the indiscriminate way in which they are often emplaced and then forgotten had, the advocates claimed, created a genuine humanitarian disaster.

By 1990, according to one estimate, as many as 110 million landmines had been buried in more than 60 countries. During the 1990s, APLs killed or maimed 26,000 people a year, 80 percent of them civilians. Children, attracted by the toy-like appearance of many mines, proved especially vulnerable. The long-term health costs have been enormous, and many countries have been unable to provide adequate care or rehabilitation services. Landmines have also seriously hindered postwar resettlement efforts and undermined agricultural productivity, because homesteaders and farmers have been afraid to use land that had been mined. In addition, removing mines is an extremely slow and expensive process. They are located and dismantled one at a time using metal detectors, search dogs, and other labor-intensive means. For most of the 1990s, the number of mines laid outstripped the number of mines cleared.

Recognizing these problems, a number of countries acted unilaterally to restrict and prohibit the use of APLs, and a sizeable portion of the international community mobilized against their use. Initially, advocates of a ban tried to negotiate an agreement through the UN system, but were frustrated by the slowness of the United Nations and the opposition of several major powers. They thus decided to work outside the UN framework. In 1996, Canada hosted an international conference entitled “Towards a Global Ban on Anti-Personnel Landmines.” Representatives from 70 countries and 50 NGOs met in Ottawa and called for a total ban on the use of APLs. This meeting led to further negotiations in Oslo and ultimately the December 1997 signing of the Mine Ban Treaty.

In its final form, the MBT strictly prohibits the use, stockpiling, production, and transfer of APLs. Article 2 of the treaty makes clear that only victim-activated mines designed to injure or maim people are covered. Of the principal types of mines used by the United States, self-deactivating APLs, which turn off automatically after a specified period of time, are included in the ban, but user-activated mines (mines set off by the troops that deploy them when enemy vehicles, as opposed to people, draw near) are not. User-activated mines include the Claymore, which upon detonation projects 700 steel balls in a lethal 60-degree arc. Troops usually set them in the ground and attach a firing device (a pull wire) that runs at least 16 meters to another position; when enemy vehicles approach, the troops set the mine off.

At present, the treaty has been signed by 145 governments but not by the United States, which claims that the loss of this military capability would jeopardize forces deployed abroad. A number of frontline states–India, Pakistan, Russia, and China–have taken a similar position.

Landmines in South Korea

Landmines serve a number of important but largely defensive battlefield functions. They channel enemy forces into specific areas where they are more vulnerable to direct and indirect fire. APLs and antitank landmines (ATLs) also protect the flanks of armed forces, the weakest point of defense. In both these roles, mines provide advance warning of an attack and delay an enemy advance as invading forces try to breach these deadly obstacles. Mines also have what is called a “force multiplier effect” by causing direct casualties and by enhancing the effectiveness of other weapons. One study on mine warfare estimated that reinforcing a defensive position with mines enhances the effectiveness of all other defensive weapon systems by a factor of between 1.5 and 2.5.

According to U.S. policymakers, APLs play an indispensable role in defending South Korea from North Korea–the primary justification for the U.S. refusal to sign the MBT. Indeed, at first blush, the security situation on the Korean Peninsula might seem to provide a strong case for retaining APLs. The hostile peace that emerged after the 1953 Korean War continues today. The areas bordering the narrow demilitarized zone (DMZ) between the two countries are the most fortified in the world. The situation is particularly tense because the South Korean capital of Seoul and a large part of North Korea’s million-man army are both located close to the DMZ. Under these circumstances, landmines are viewed as a deterrent to a North Korean invasion, as well as a means of delaying and degrading an assault should deterrence fail. They provide time, U.S. officials assert, for combined U.S. and South Korean forces to react without a significant loss of ground.

Even ex-U.S. military commanders in South Korea believe that APLs are not critical in maintaining the peninsula’s security.

In reality, however, the case for the use of APLs in South Korea is a weak one. First, the military balance on the peninsula tilts strongly to the South, according to regional and military experts. South Korea spends three times as much as the North on defense, which has led to a considerable qualitative and technological edge over the North–an edge that compensates for the North’s numerical superiority. About half of North Korea’s major weapons date to the 1960s; the rest are even older. North Korea’s aging T-62 tanks square off against fully modernized South Korean Type 88s and U.S. M1s. In terms of artillery, the North’s largely towed arsenal is outgunned by U.S. and South Korean self-propelled howitzers and multiple-launch rocket systems. In terms of aircraft, North Korea’s antiquated Russian MiG-19s and MiG-21s would have to face modern AH-1 (Cobra) and AH-64 (Apache) attack helicopters and F-16 fighters. In short, U.S. and South Korean forces would easily have control of both the ground and the skies.

Equally important, allied weapon systems would be served by sophisticated all-weather, day-night intelligence assets, including overhead reconnaissance satellites, radar-imaging aircraft, and ground-based detection systems. This equipment would not only coordinate friendly fire during an assault, it would degrade one of the North’s perceived advantages–surprise–by detecting any large-scale massing of armored vehicles and troops prior to an attack.

In addition to possessing superior firepower and intelligence, the U.S.-South Korean forces would have the benefit of fighting on terrain that naturally favors the defender. Most of the land between the two countries is rough and mountainous; its few flat areas are rendered nearly impassable by marshy rice paddies. The natural axes of advance from the North have been mapped, targeted, and retargeted by numerous allied weapon systems.

Another factor is that the North’s military readiness has been seriously eroded by economic decline. The loss of its cold war allies, its economic and political isolation, and bad weather have combined to produce chronic fuel and food shortages. These conditions have been linked to the deaths of as many as 2.5 million North Koreans and the migration of thousands of refugees into China. Moreover, they have prevented the country from conducting large military exercises. Indeed, the Pyongyang regime may very well have collapsed without the fuel oil and food donations it has received from the West during the last decade.

All these factors significantly reduce the likelihood of a North Korean assault as well as the likelihood of success if Pyongyang were foolish enough to attempt an invasion. Brookings Institution military analyst Michael O’Hanlon recently reviewed the military balance on the peninsula–a review that included quantitative analyses of a range of combat situations, including North Korean use of chemical weapons–and concluded that the combined U.S. and South Korean forces not only could defend South Korea without reinforcements but could stop a North Korean advance well north of Seoul. Any North Korean offensive, O’Hanlon said, would be “stopped cold.”

Would landmines play a significant role in bringing about this outcome? Apparently not. The Department of Defense’s standing response plan to a North Korean offensive does not call for the delivery of APLs to counter an initial attack. Furthermore, according to one Pentagon study, the APLs already in place would slow a North Korean advance by only a few minutes. This is perhaps because the psychological impact ascribed to APLs is overrated. In many instances during the Korean War, North Korean and Chinese soldiers quickly breached allied minefields by simply incurring heavy casualties. There is little reason to believe that today’s highly disciplined, ideologically motivated North Korean soldiers would act otherwise.

Even former U.S. commanders in South Korea do not believe that APLs would significantly contribute to the country’s defense. In a May 2001 letter to President Bush, a group of retired officers wrote that “APLs are not in any way critical or decisive in maintaining the peninsula’s security.” In fact, they noted, APLs might actually endanger U.S. soldiers by interfering with a counterinvasion of North Korea.

These arguments against the usefulness of APLs in South Korea are by no means meant to trivialize the real military threat posed by North Korea. Its forward-deployed artillery pieces and its long-range surface-to-surface missiles are capable of unleashing a no-warning and potentially devastating attack on Seoul. It is also on the verge of possessing a viable nuclear attack capability. These are serious threats that the West must act to neutralize. But none of these threats is affected by the presence of APLs. In short, APLs are not significant in deterring or defeating a conventional North Korean attack, and they are utterly ineffective in dealing with the other kinds of threats posed by the North. The Korean case does not serve well as the foundation of the U.S. refusal to sign the MBT.

A potential new role for APLs

Throughout the 1990s, mine ban advocates assumed that once APLs ceased to play a role in South Korea’s defense, the White House would agree to sign the MBT. Recently, however, the military has discovered a new role for APLs.

Many analysts today believe that the kind of conflict the United States is prepared to fight in Korea–two conventional armies squaring off against one another–is an anachronism. More likely are the types of operations that U.S. forces conducted in Haiti, Somalia, and Afghanistan: low-intensity conflicts in which U.S. troops face irregular forces and nonstate actors. Dealing with these threats may require small groups of forces to operate autonomously in remote, hostile areas. These light forces will have to be prepared to fight outnumbered and often without traditional backup air or artillery support. In these circumstances, the passive defenses provided by APLs could be important.

The lessons from Operation Enduring Freedom in Afghanistan are instructive in this regard. Although the United States established large bases at Bagram and Kandahar, the most damaging assaults against the Taliban and al-Qaeda were conducted by special forces units operating from remote bases in areas near the Pakistani border. These forces regularly deployed self-deactivating APLs and antitank systems to augment their defenses. In addition, during the Persian Gulf War, special operations forces working behind Iraqi lines also used APLs, and they will likely be used in a new war with Iraq and in its aftermath.

New nonlethal technologies such as sticky foam and antipersonnel microwaves can serve the same function as APLs.

Still, the utility of APLs in these situations should not be overstated. User-activated mines such as the Claymore could provide the same functions as APLs. Moreover, new nonlethal technologies could substitute for APLs. Examples include sticky foam, which impedes the progress of persons and vehicles; acoustic devices that emit intense high-power sound energy that disorients foot soldiers; victim-activated high-tension nets that immobilize groups of advancing enemy forces; and antipersonnel microwaves that temporarily raise body temperature to extremely uncomfortable levels. In addition, countries with highly effective special operations forces, including the United Kingdom and Canada, have managed to conduct combat operations without using APLs.

Despite the existence of alternatives, the Afghanistan experience will persuade many that APLs should be retained. The result will be continued reluctance to sign the MBT. But although the United States may gain some military benefit from retaining APLs, the long-term cost to the country will be considerably greater. From the perspective of much of the world, U.S. noncooperation on the landmine issue calls into question U.S. capacity to lead. It also risks undermining the multilateral structures that the United States has nurtured for more than 50 years and that have been integral to attaining and maintaining its preeminent position in world affairs.

The U.S. policy dilemma

At least since World War I, U.S. foreign policy has been guided by two priorities: the creation of a new world order based on liberal values and practices, and the preservation of U.S. preeminence through realist strategies of military dominance. During the cold war, the existence of a superpower rival made clear to all the severe constraints on multilateralism, and provided a virtually automatic justification for the use of force to address a wide range of problems. This helped keep tensions between these two foreign policy impulses from becoming unmanageable, and also persuaded many that they could be harmonized without much difficulty.

With the demise of the Soviet Union, the automatic justification for the use of force disappeared, along with many obstacles to multilateralism. As a stream of statements from UN officials and allied leaders makes clear, much of the world believes that the United States should now act on its own rhetoric and play a leadership role in consolidating and expanding the multilateral structures appropriate and necessary for global peace and prosperity. U.S. behavior is being carefully scrutinized: Is it using its unrivaled power responsibly as a world leader, or self-interestedly to maximize short-term national interests?

Initiatives such as the MBT offer low-cost opportunities for the United States to affirm its right to lead by sending clear messages to the rest of the world about its commitment to multilateralism. When the United States refuses to expand multilateral structures on pressing issues such as victim-activated mines, where it is clear that the costs of compliance are low, the world inevitably raises questions about the character of U.S. leadership and looks anew at its commitment to the guiding principles of larger structures such as NATO and the UN Security Council. Putting these institutions in jeopardy at a time when multilateral cooperation is clearly needed to combat terrorism and other transnational security threats is a grave mistake. The U.S. position in the world depends as much on the multilateral institutions that it helped to establish as on its national capabilities. Signing the MBT will help to reaffirm its commitment to providing the sort of world leadership that other countries will want to support.

Improving Scientific Advice to Government

Congress and federal policymakers draw on independent expert panels for scientific and technical advice in addressing some of society’s most controversial and economically significant issues. It is imperative that these panels operate in the most productive manner possible. Yet recent “reforms” intended to increase the panels’ transparency and reduce the potential for bias and conflict of interest may be weakening the advisory process.

Concern centers particularly on two of the most important sources of advice: the National Research Council (NRC), which is part of the National Academy of Sciences (NAS), and the Science Advisory Board (SAB), which serves the Environmental Protection Agency (EPA). Both the NAS and SAB create panels to develop consensus opinions on some of the most controversial scientific topics of our time, testing science’s predictive tools and directly affecting the outcome of federal policies involving billions of dollars in private and public outlays. Interest groups acknowledge the crucial influence of the panels by focusing considerable resources on how NAS and SAB panels are populated and managed.

Beyond these similarities, considerable differences separate the NAS from the SAB. Since the NRC’s creation during the Wilson administration, the NAS has convened several thousand panels and last year alone published 280 consensus reports on almost every conceivable topic of policy-related science. The SAB dates only to 1978 and–a crucial difference–advises a single agency on topics of the EPA’s choosing via an executive committee appointed by the EPA administrator. In many respects, the SAB is simply a more evolved and “institutionalized” federal advisory panel. It is fair to say that, by comparison with the NAS, the SAB is much more beholden to its creating sponsor, more vulnerable to influence by agency staff scientists, and far less robust in providing accountability for the quality of its final reports.

In recent years, both the NRC and the SAB processes for rendering advice have undergone change in response to criticism, largely from activist nongovernmental organizations, regarding their supposed lack of transparency and the potential for bias and conflict of interest of panel members. NRC reforms were codified in 1997 in amendments to the Federal Advisory Committee Act, which were stimulated by a federal circuit court decision that animal rights activists won against the academy. The SAB adopted elaborate guidelines in the wake of a 2001 General Accounting Office report that was highly critical of SAB’s conflict-of-interest practices.

My perceptions of the undesirable side effects of these changes are impressionistic. No survey research has been done; no data have been collected. Yet I do have 30 years of experience serving on NAS committees, boards, commissions, and panels, and I follow the SAB in my law practice. Further, I have shared my ideas with a number of key participants in the NAS and SAB processes. They acknowledged the tendencies I describe, gave confidential examples from their experience, and urged me to share my views for discussion.

At the heart of the matter, the processes employed by the NAS and the SAB in forming advisory panels, and the processes by which the panels write their reports and the conveners review those reports, have become much more formal and procedural. These processes are now more open to stakeholder input and influence, and the panels themselves are constituted to be more neutral on the issues and more balanced in the interests represented. As a result, panel meetings increasingly resemble public hearings before neutral science judges.

These developments may attract praise in a democratic society that highly values procedural due process and fairness. But, on balance, their unintended consequences may well end up diluting the quality of NAS and SAB scientific advice. Many scientists now look on with increasing concern, and some of them have become reluctant either to serve on panels or to recommend others to serve.

One concern centers on privacy, which, in turn, is part of a wider issue regarding injury to reputation and career from unwarranted disclosures and advocacy efforts. For example, new SAB rules require panel candidates to disclose all of their financial, professional, and personal ties–and to do so as well for their spouses and dependent children. Candidates are required to complete a lengthy questionnaire that includes listing assets, affiliations, research projects, consultancies, liabilities, and compensated expert testimony. The rules also call for a period of open public commentary after “biosketches” of potential panel members are released. Critics now argue that all this does not go nearly far enough.

Many scientists worry that efforts to make panel deliberations as transparent as possible mean that essentially all meetings will be conducted in a public fishbowl with the attendant risks of stakeholder “grandstanding” and opportunities for interest-group pressure on individual members who speak their minds openly. The NAS mandate, for example, now requires a 20-day period for public comment on committee slates, advance public notice of committee meetings and agendas, a public access file for all materials provided by anyone to committees, a publicly available summary of the closed sessions, and after-the-fact public disclosure of the names of peer reviewers.

There is concern, too, about the increased pressure on conveners to scrub panels vigorously for conflict and bias. This sometimes leads to selection of panel members for mere neutrality or disinterest–or even selection of less qualified candidates in order to balance panel membership by ideology, gender, age, and geographic distribution. The NAS and the SAB now appear to believe that if they solicit nominees widely from stakeholders and professional groups, obtain detailed information about panel candidates, vet this information publicly with stakeholders, limit panel service to conflict-free survivors, “balance” final membership to counteract bias, and maximize the openness of deliberations, then they will obtain the best scientific advice that consensus panels can provide.

But these measures can discourage the best scientists from serving because of the paperwork involved, the public disclosure of personal information, the attention that any knowledgeable scientist almost inevitably will attract from one interest group or another, and the stigma attached to anyone passed over by conveners reluctant to risk a fight over a nominee. Panel members are generally volunteers. As much as they want to provide public service and garner the prestige of membership on a blue-ribbon panel, the best and brightest scientists have plenty to do at their workbenches without running the gauntlet of panel service.

There also is a real risk that valuable technical expertise will be lost in the quest for neutrality. The problem often comes down to how conveners view the twin threats of conflict of interest and bias. The “black-letter law” of conflict and bias, much discussed in a large body of literature, comes down to this: Direct or apparent financial conflict disqualifies and may be illegal unless an explicit waiver for otherwise unobtainable expertise is secured. Bias does not ordinarily disqualify, but once disclosed should be balanced against the countervailing biases of other nominees. But this “settled” practice clashes with the reality that financial, professional, and ideological concerns are not clearly separable. Each candidate reflects a complex mix of experiences, judgments, and prejudices. What is more, the greater his or her expertise, the more likely that the candidate will appear to have at least some financial conflicts and biases, however mild, based on employment, personal wealth, prior publications, public statements, personal insights, and research agendas. No candidate is capable of a pure passion for dispassionate public service.

The NAS points out that stakeholders (and even potential nominees who are judged too conflicted or biased to serve) can present their views at one of the several open committee meetings now required by internal mandates or law. SAB panels also have made greater use of public open-mike sessions. But losing panel expertise and then compensating with public hearings is not a commendable alternative to the more informal consensus panels, whatever their faults, that existed previously.

This is not to say that screening for conflicts and biases is inappropriate. Acute and direct financial conflicts, and intensely advocated policy positions, create a strong potential for mischief. However well-intentioned, a scientist so burdened would be hard-pressed, and probably unable, to leave these behind upon entering the panel’s meeting room. Nor can even the most carefully managed waiver overcome this problem. No one can reasonably expect a scientist to function effectively under conditions that require a direct, clear choice between sacrificing scientific judgment and either defying an employer’s wishes or abandoning an intensely advocated public position.

Steps to improvement

The bottom line is that the NRC and the SAB already do a good job of assembling panels and seeking to create an environment that fosters collegiality, the highest professional standards, and reliable assessments of the science at issue. Still, certain corrections and additional measures do seem warranted to protect the integrity of consensus science advice to government.

Against the backdrop of recent reforms and the principles of democratic legitimization that drive them–due process, transparency, public participation, and impartiality–the recommendations made here may seem mildly reactionary. At the very least, they may echo a bygone era (if it ever existed) when society was less contentious and more trusting, when professional norms and peer pressure were stronger, and when scientists seemed more capable of impartial public service. But the key questions here are of degree rather than absolutes; tendencies rather than extremes.

  • Conveners of consensus scientific panels should continue to vet candidates publicly, both for conflicts of interest and for passionate biases that would have to be balanced by selecting an equally passionate member of the opposite persuasion. Although this may increase the risk of a “hung” panel, the appearance of impartial, accurate scientific consensus is vitally necessary to federal policymakers and the public. Conveners might, however, be less beholden to stakeholders when a candidate offers needed expertise. Also, conveners should limit the opportunity for public comment to the slate actually proposed to serve. Panel formation should not be a competition; no one’s reputation should have to suffer a public airing and rejection after volunteering to serve.
  • The conveners’ primary concern should be populating panels with the best scientific and technical expertise available. Balancing interest groups has no place in panel formation; balancing disciplines does. The important thing is to ascertain in advance if a difference in scientific opinion exists that requires different viewpoints to be represented. Remember: These panels offer expert advice; they do not make policy.

In addition, a variety of nonscientists and near-scientists–for example, attorneys, ethicists, and political scientists–serve on NAS and other panels. Again, their expertise is crucial, but their policy views peripheral. Their presence is perhaps necessary in view of the expanding range of advice Congress and the agencies seek. But panels of scientists should listen very critically to the odd lawyer or philosopher in their midst, since balancing all types of relevant expertise is impractical. In the end, panels should pronounce on science as the scientists see it, not as advised by others. Further, conveners should resist taking on policy-laden topics best left to other venues. Science cannot assume the responsibility to objectively and “scientifically” resolve tangled policy issues in which science may play only a supporting role. Consensus science panels are but one part of the federal policy selection process. Economic, political, and equity considerations belong in final federal policies but not in science panel reports.

  • Conveners should seek expertise despite the potential for conflict or bias. In casting a wide net for expertise, conveners will find that some of the best candidates have significant potential conflicts and biases. The most egregious of these should, of course, disqualify. But merely coming from industry or a nonprofit background should not disqualify a candidate. Indeed, industry and nonprofit groups are prime sources for nominees. In every instance, conveners should value expertise over the use to which it may have been put. This means that conveners must thoroughly satisfy themselves in advance that their nominees’ potential conflicts and biases can be managed by self-disclosure and questioning by other panelists.
  • Some of the most controversial NRC and SAB panels involve the science that underpins health, environment, and safety regulation. Some public advocates have argued that because Congress has enacted precautionary legislation, and federal agencies have codified the precautionary principle in their risk-reduction rules and guidance, consensus panels of scientists must interpret and apply science to conform to these legal mandates. Yet it is not clear why science should be guided by these legal norms. Panels are not “science juries” rigidly bound by legislative instructions. They should be free to give their best scientific opinion on the technical merits of the bridging inferences and assumptions–that is, the “defaults” that enable science to support risk-averse regulatory standards despite significant scientific uncertainty.

The practices of science demand close attention to facts and data; empirical observation; rigorous logic; an explanatory chain of scientific causation, reproducibility, and predictability; and peer review. This is what scientists “do.” When asked to provide advice to government, this background fundamentally determines, and limits, what scientists may offer. Scientists want to be helpful, but they should not too willingly become handmaidens to regulators or to legislators who are all too eager to pass the policy buck by labeling an issue “scientific” when it really is not. Supreme Court Chief Justice John Marshall long ago wrote, “It is emphatically the province of the judicial department to say what the law is.” Risk analysts need to reassert the primacy of science in risk assessment. To paraphrase Marshall, it is the province of science to say what risk assessment is. EPA risk assessment and management admittedly require an accommodation between science and regulatory policymaking. But this accommodation should not be allowed to obscure the central role of science in determining what is scientifically plausible in risk assessment. Objective, neutral science is the necessary baseline from which precautionary risk management should proceed. Without this essential contribution from science, federal risk managers will be at sea as to what the best scientific estimate of risk is.

  • Conveners should discourage, and possibly prohibit, outside one-on-one exchanges between panel members and groups with a stake in the outcome of the panel’s deliberations. As but one example of such meddling, an academic scientist serving on an SAB panel reviewing the EPA’s 2000 draft dioxin risk reassessment has reported that lobbying by environmental organizations, industry, and EPA staff scientists alike was intense and contributed to a sense that the deliberative panel meetings were mere window dressing.
  • Prospective panelists should make an explicit and perhaps written commitment to listen with an open mind to the discussion and participate fairly and impartially, supplying facts and opinion without regard to employment or prior positions taken on the issues. In the event of violation, no legal obligation would attach; rather, the sanction would be embarrassment in the scientific community.
  • The best antidote to undue influence caused by financial conflict or personal bias is full understanding by copanelists. A confidential discussion among copanelists regarding bias and conflict (preceded by “ground-truthing” by the conveners) serves to alert copanelists to background and perspective that may shape a panelist’s contribution to the consensus effort. The NAS has long used such discussions and proven their worth.

Disclosure rather than disqualification is realistic and honest: The best scientists with the most to contribute will have a rich background of professional and public experience, with at least some type of financial stake and a record of expert opinion in the scientific and public literature. In this way, panelists are implicitly burdened to critically evaluate their copanelists’ contributions. This is more realistic than falsely assuring them that the conveners alone are responsible because they have conducted an inquiry and found the panel to be conflict-free and bias-balanced. Disclosure rather than disqualification makes very clear that face-to-face “peer review” and peer pressure are expected to produce consensus on a science-based report. Disclosure works best in concert with the personal commitment to neutrality and an emphasis on populating committees with strong expertise that will correct weak (that is, biased or conflicted) argumentation in panel discussions.

  • The practice of conducting open public meetings and presentations should, of course, continue. The problem is that there are too many of them, and they deflect attention from the obligation of the conveners and panelists to obtain the best expertise for the panel itself, to deliberate, and to write a consensus report. Holding more confidential discussions, and fewer public sessions, permits panelists with limited time and budget to work more closely with each other and “bond” to lay a foundation for consensus. Confidential discussions shield panelists from stakeholder oversight, enable scientists to shed public positions, minimize opportunities for stakeholder grandstanding, and–most important of all–give peer pressure more leeway to function effectively. Coupled with limits on outside contacts, confidentiality can play a far more constructive role than any proponent of the current reforms has been willing to acknowledge.
  • In the end, the panel’s report is what counts. It should be carefully drafted by the panel itself (a condition of membership) or, on occasion, by staff under tight instructions and panel supervision. This practice already is the hallmark of the finest NAS reports. As with a judicial opinion, a panel report should rest on an underlying rationale that speaks for itself and can withstand the most withering critical reviews. Dissenting or supplementary views detract and may represent a last opportunity for stakeholder grandstanding. In consensus reports, anonymity lends strength, as the NAS has long realized. The NAS places a high premium on dissent-free final reports, and its multilayer approval process labors to reconcile views within a report’s consensus framework.
  • Panel reports should be independently reviewed and refereed. The NRC Report Review Committee (RRC) supervises the peer review of draft panel reports. More important, the RRC decides when the panel has adequately responded to its concerns–that is, the RRC decides when a panel report is of sufficient quality to be released publicly. The SAB has no such independently refereed peer review, although its executive committee plays some of this role. The value and seriousness of independently refereed peer review have been confirmed again and again at the NRC. Reports have been rewritten; major conclusions have been changed. The very existence of the RRC process exerts a discipline on NRC panels that appears to be lacking at the SAB.

These recommendations cannot cure all the ills of the current “reforms.” Nor should they be implemented inflexibly or in elaborate or formal guidelines, lest they fall into the same procedurally heavy trap into which current policies, particularly the SAB’s, have fallen. But if the NAS and the SAB are to avoid the public hearing and adversarial due process models more appropriate to the making of rules and policies, then the recommendations should receive consideration.

A Broader Vision for Government Research

In recent years, the question of “balance” has been a hot topic in science policy circles. Although federal support for biomedical research has increased significantly, support for the physical sciences, engineering, and social sciences has been flat or has even declined in real terms. This is an important issue and clearly needs to be addressed.

However, there is another sense in which the government’s research portfolio lacks balance. R&D plays an important role in meeting some national goals, such as national security, health, and space exploration. But for other national goals, such as improving student performance, promoting sustainable development, and empowering more Americans to lift themselves out of poverty, it plays little or no role.

Noted science fiction author William Gibson once observed that “The future is here, it’s just not evenly distributed.” A similar comment could be made about the ability of the U.S. government and the research community to help create the future. Some agencies, such as the Departments of Defense and Energy, the National Institutes of Health (NIH), and the National Aeronautics and Space Administration, have the budget and the capacity to support research and innovation on problems that are related to their mission. Others, including the Environmental Protection Agency (EPA), the U.S. Agency for International Development, and the Departments of Education, Labor, State, and Housing and Urban Development, have little or no such capability.

This imbalance may limit our ability to support research in what Donald Stokes referred to as “Pasteur’s Quadrant” and what Gerald Holton and Gerhard Sonnert have called Jeffersonian science: research that pursues fundamental understanding but is also motivated by consideration of some practical problem. If the agency charged with advancing a particular set of national goals has little or no ability to support research, and if such research is not an attractive investment for firms, there may be important and systemic gaps in the nation’s research portfolio.

Moreover, these gaps are likely to persist over time. Federal budgets tend to be incremental. If an agency has little or no money for research, this is not likely to change from one year to the next. Furthermore, agencies don’t know what they don’t know. If they don’t have a strong relationship with the research enterprise, they are unlikely to appreciate how research funding might help them achieve their objectives or what scientific and technological advances they might be able to exploit. Finally, the lack of a vibrant community of leading researchers interested in a particular problem deters agencies from trying to create one. Building such a community may involve significant startup costs and a long lead time, during which there may be little or no payoff.

To evaluate the opportunity costs of these gaps, it is necessary to understand the benefits that can flow from creating a high-quality, well-supported, multidisciplinary community of researchers who are interested in helping to meet a particular national policy objective.

How research helps

First, research can advance the state of the art in an area of science and technology that will make it easier or less expensive to meet a given national goal, or even reframe the way that a policy issue is debated or discussed. For example, EPA currently pursues the goal of a cleaner environment primarily through command and control regulation, as opposed to supporting the creation and diffusion of technologies that minimize pollution in the first place. Greater emphasis on the latter approach might allow the United States to achieve its environmental objectives while reducing the economic costs imposed by regulations. Increased support for research in experimental economics and mechanism design would allow policymakers to understand when and how to use market-oriented mechanisms such as the EPA’s “cap-and-trade” program for acid rain. The old adage that “if all you have is a hammer, the whole world looks like a nail” is certainly true for federal agencies.

Second, research can help create a more rigorous basis for making decisions or setting public policy. For example, some areas of policy (such as welfare policy and adult training) have benefited significantly from randomized field trials analogous to the clinical trials conducted by medical researchers. The researcher randomly assigns some individuals to a control group and others to an experimental group that receives the “treatment” being evaluated. Researchers generally have greater confidence in conclusions reached through randomized field trials than in those produced by nonexperimental research. Although randomized field trials are not always feasible, and they cannot shed light on all policy questions of interest, they are clearly underutilized in some important policy areas such as education. A recent analysis of 144 contracts for program evaluation awarded by the Department of Education between 1995 and 1997 found that only 5 used a randomized controlled design to measure the impact of federal programs. Social science research can also help shed light on the broader economic and social context that will shape and in turn be shaped by technological advances.
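
The logic of random assignment can be made concrete with a minimal sketch in Python. It is purely illustrative: the data are synthetic, and the assumed program effect, noise level, and function name are hypothetical rather than drawn from any evaluation cited here. The point is simply that coin-flip assignment makes the two groups comparable on average, so the difference in group means estimates the program’s effect.

```python
import random
import statistics

# Illustrative randomized field trial on synthetic data (hypothetical values).
# Each participant is assigned to treatment or control purely by chance, so the
# groups are comparable on average and the difference in mean outcomes
# estimates the effect of the "treatment" itself.

random.seed(42)

TRUE_EFFECT = 5.0  # assumed effect of the program, in outcome points


def run_trial(baseline_scores):
    """Randomly assign participants and return outcomes by group."""
    outcomes = {"treatment": [], "control": []}
    for score in baseline_scores:
        group = random.choice(["treatment", "control"])  # coin-flip assignment
        outcome = score + (TRUE_EFFECT if group == "treatment" else 0.0)
        outcome += random.gauss(0, 3)  # unobserved noise affecting everyone
        outcomes[group].append(outcome)
    return outcomes


# Synthetic baseline scores standing in for, say, participants' test scores.
participants = [random.gauss(70, 10) for _ in range(1000)]
groups = run_trial(participants)

estimate = statistics.mean(groups["treatment"]) - statistics.mean(groups["control"])
print(f"Estimated program effect: {estimate:.2f} points (assumed true value: {TRUE_EFFECT})")
```

By contrast, in a nonexperimental comparison those who choose to enter a program may differ systematically from those who do not, and no amount of additional data removes that selection bias; randomization is what removes it.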

Third, government support for research, particularly university-based research, helps create or expand a workforce with specialized skills. Creating such a workforce may be critical to achieving a particular policy objective. Recently, for example, the federal government has acted to increase the number of undergraduate and graduate students with a background in cybersecurity, since the government was unable to recruit enough people with the necessary skills. Similarly, NIH has recognized that exploiting the revolution in genomics will require addressing the shortage of researchers in bioinformatics. Agencies that lack the ability to support university-based research, fellowships, and traineeships will not be able to help create this kind of specialized workforce.

Fourth, government support for research can lead to innovation in the development and use of new technologies. Researchers can start new firms and transfer their technologies to existing firms. They can suggest “figures of merit” that create new metrics for measuring technological progress or develop open technical standards that serve as the basis for entirely new industries. They can create test beds that offer insights into the impact of novel combinations of technologies in real-world settings. They can help dramatically lower the cost of a given activity, such as sequencing genomes and storing, transmitting, and processing information. Finally, once a research community with an interest in tackling a given problem has been created, its members will be able to identify future advances in fundamental understanding or technological capability that are feasible and relevant. Although these predictions can trigger a backlash if researchers overpromise and underdeliver, they can help policymakers understand the potential benefits of investments in science and technology.

One useful thought experiment is to imagine what might happen if a federal department with a limited capacity to support research suddenly grew a research agency with the resources, reputation, and expertise equivalent to those of the Defense Advanced Research Projects Agency (DARPA). Imagine that EPA or Education or Labor now has a research arm with a budget of several billion dollars. It has no “entitled constituencies” and can support research in industry, academia, and national labs. Commercial companies and industrial researchers are more likely to interact with it, because it has been given special exemptions from some burdensome procurement and personnel regulations. It can support activities ranging from fundamental research to technology demonstrations and can also establish cash prizes to stimulate technological advances in areas of interest. It recruits entrepreneurial program managers who are the peers of the leading researchers in the country and encourages them to take risks. It also empowers them to create portfolios of research projects that will lead to outcomes that are greater than the sum of the parts, as opposed to automatically funding those proposals that received a favorable “priority score” from a peer review process. Its solicitations command instant attention from the university research community, because instead of offering the standard grant that can support only one or two graduate students, it is willing to support teams of faculty and their graduate students. It is willing to “make the peaks higher” and does not feel compelled to support researchers in all 50 states. Finally, it has a unique ability to support multidisciplinary research, because it is not organized around scientific disciplines.

Although this is an admittedly idealized portrait of DARPA, it is useful for stimulating our thinking about what could result from expanding the research capacity of a given agency. What medium- and long-term goals might this new agency set? What would be the societal payoff if these goals were achieved? Would the new agency fund research that would be significantly different from that done by the current federal research agencies? Do the research communities needed to perform the research exist, and if not, how might they be created? What existing and future technological waves–information technology, microsystems, biotechnology, nanotechnology–would this new agency seek to harness?

Funding gaps

I do not mean to suggest that more R&D will solve all national and global problems. In almost all instances, R&D is likely to be just one component of a broader response to a given public policy challenge. But I do believe that R&D is underutilized as an instrument of national policy. One of the reasons for this is that the capacity to support research varies widely across the federal government. Policymakers and researchers take the current distribution of science and technology capabilities across the federal government as a given, and it rarely undergoes any serious consideration or discussion. Below are a few examples where we are not spending enough as a nation, although in all cases there is at least some level of effort.

Improving education and lifelong learning. A 1997 report by the President’s Committee of Advisors on Science and Technology observed that we were investing less than 0.1 percent of total K-12 expenditures on R&D, as compared to the 23 percent R&D-to-sales ratio in the pharmaceutical industry. This is an area where the United States is clearly underinvesting, given the importance of education and lifelong learning to a well-functioning democracy and our long-run standard of living. As mentioned above, there has been little experimental research on K-12 education. A 2002 report by the Coalition for Evidence-Based Policy concluded that “existing practices have rarely or never been tested”; these include standards setting, whole-school reform, charter schools, math and science curricula, teacher training, and language instruction for students with limited English proficiency.

In addition, although the United States has made a large investment in providing schools with computers and Internet access, very little research has been done to advance the state of the art of educational technology and to understand how it might best be used to support teaching and learning. Possible goals identified by researchers in this field include developing software that approaches the effectiveness of a one-on-one tutor; creating interactive simulations that allow students to engage in learning by doing; embedding assessment in learning environments so that it can continuously guide instruction; enabling students and subject-matter experts (as opposed to programmers) to rapidly create high-quality, reusable educational content; and constructing collaboration tools that allow for the sharing of expertise among peers, tutors, and experts. Many experts believe that the federal government has not supported grants with the necessary scale and scope. Educational technology research projects conducted in a lab or a single classroom may not shed any light on what it will take to move these innovations into everyday use in a large urban school district.

Although the National Science Foundation (NSF) has made some investments in these areas, the Department of Education has made almost no investment in educational technology R&D and has supported very few assessments of educational practices using random assignment. The administration’s proposed FY 2004 R&D budget for the Department of Education is $275 million, or only 0.2 percent of total federal R&D expenditures. The Department of Labor has no R&D budget. As a result, it has done very little to explore the role that technology might play in helping adult learners acquire skills more rapidly and conveniently.

Environmental technologies. There are many interesting ideas for technologies that could contribute to a cleaner environment and sustainable development through better monitoring and assessment, pollution minimization, and environmental remediation. Examples include “green chemistry,” designing for reuse and remanufacturing, and engineered microorganisms for remediation. Unfortunately, EPA, the agency that might be expected to take the lead on such an initiative, has almost no money for research in this area. Most of EPA’s modest research budget supports regulatory decisionmaking and the assessment of environmental and human health risks; little is left to support the development of technologies that would minimize pollution to begin with.

This clearly leads to some missed opportunities. EPA invests only $2 million in the $2.18 billion multiagency information technology initiative and $5 million in the $847 million nanotechnology initiative. At these levels of funding, EPA is unable to adequately support the many potential applications of nanotechnology that it has identified, such as filtration, remediation, environmentally benign manufacturing, and low-cost, highly selective sensors. For example, an EPA that was given an expanded R&D budget and charter could develop an ambitious strategy to put environmental monitoring on the Moore’s Law curve, leveraging recent advances in microfluidics, nanotechnology-based sensors, and wireless networks. This might allow individual communities to have continuous detailed information about air and water quality in their neighborhoods.

Science and technology for developing countries. Many believe that science and technology could play a much greater role in fostering economic and human development in developing countries. For example, according to the Global Forum for Health Research, only 10 percent of global medical research is devoted to diseases that cause 90 percent of the health burden in the world. Of the 1,233 drugs that reached the global market between 1975 and 1997, only 13 were for the treatment of tropical infectious diseases that primarily affect the poor in developing countries. More broadly, researchers could develop technologies that could help address a variety of challenges faced by developing countries and expand access to clean water, cleaner sources of energy, and affordable information and communications services. To date, however, developing countries have had a limited ability to support R&D, and development agencies such as USAID and the World Bank have not viewed supporting innovation as an important part of their mission.

In short, many potentially important areas of science and technology are not adequately funded. We have a “revolution in military affairs” but no “revolution in diplomatic affairs,” because the Defense Department has a $62.8 billion R&D budget and the State Department has none. We are not thinking systematically about the nonhealth applications of biology or “synthetic biology” because this is not the responsibility of NIH, the major government sponsor of biology. Agencies are embracing e-government to improve customer service and become more efficient, but no agency is sponsoring research and experimentation on “e-democracy”: the role of the Internet in expanding opportunities for informed participation and deliberative discourse.

What can be done

If the research community, foundations, and policymakers are interested in addressing this problem, there are several different possibilities worth exploring.

Universities and foundations could demonstrate what is possible. Although the federal government plays the dominant role in funding university-based research, universities and faculty members could identify areas of research that they believe are being underfunded; develop a compelling research agenda; and seek support from foundations, state governments, industry, and individual donors. With preliminary results in hand, they would be in a better position to demonstrate to Congress and the administration the potential benefits that might flow from increased investment in a particular area. For example, the University of California at Berkeley and other University of California campuses, with initial funding from the state of California, have launched the Center for Information Technology Research in the Interest of Society (CITRIS). This center is encouraging faculty to explore the use of advanced technologies to address a wide range of economic and societal challenges, many of which have not traditionally received much attention from the major federal research agencies.

Leadership might also come from foundations. Consider the critical role that the Rockefeller Foundation played in molecular biology beginning in the 1930s, the Whitaker Foundation’s pivotal investments in biomedical engineering, and the leadership that the Bill and Melinda Gates Foundation has demonstrated in global public health. Foundations looking for areas where they can make a unique contribution might choose to help “seed” an area of research that the federal government is neglecting. This is a potentially promising approach to explore a new area, but it is probably not a substitute for federal investment over the long term.

NSF could be given the resources to significantly expand funding in important “gap” areas. NSF is the only agency with the mission to support science and engineering in all disciplines. In addition to evaluating proposals on the basis of their intellectual merit, it also considers the potential benefits to society of the proposed activity. NSF has a successful track record of working with other mission agencies to fund research that is motivated by specific national problems. The NSF/EPA Partnership for Environmental Research is a good example of the kind of collaboration that could be expanded with additional funding. There has been a great deal of support in Congress for doubling the NSF budget, but the current administration has rejected the proposal as being “formula-driven” and arbitrary. Identifying broad areas of use-inspired fundamental research that are currently being underfunded could help establish a more compelling case to increase the NSF budget.

There are several risks to this approach. The first is that NSF needs more money to do its core mission. The average size and duration of NSF grants are inadequate, particularly for multi-investigator awards, and NSF could easily absorb a doubling or tripling of its budget by addressing this problem. The second is that focusing the additional resources at NSF will not significantly change the culture of the relevant mission agency and may reduce opportunities for coordination between R&D and other public policies.

New research institutions could be created, or the capability of an existing agency to support research could gradually be expanded. Policymakers could also consider creating new research agencies. A National Institute of Learning, for example, could fund research on cognitive science, educational psychology, advanced learning technologies, and large-scale rigorous evaluations of different proposals for K-12 reform. Another approach would be to increase the funding and capacity of an existing agency to support research. This might involve appointing a chief scientist or elevating that position; using the National Academies to identify potentially relevant research issues; aggressively using the Intergovernmental Personnel Act to recruit researchers from universities and national labs; establishing external advisory committees; determining where an incremental investment might allow the agency to leverage science and technology funded by industry and other federal research agencies; and gradually increasing the agency’s R&D budget as it demonstrates the ability to successfully manage a high-quality extramural research program.

Science, technology, and innovation could be making a bigger contribution to a broader range of national and global goals. We should stop treating the current funding structure for research as a given and begin experimenting with different ways in which science and technology could help address our most pressing challenges.

Reinvigorating Genetically Modified Crops

In August 2002, government officials in the United States were shocked when Zambia, which was on the verge of a major food crisis, began to refuse the import of free U.S. corn as food aid, because some of that corn might be “genetically modified” (GM). This was the same corn that U.S. citizens had been consuming and that the United Nations World Food Programme had been distributing in Africa–including Zambia–since 1996. In short order, three other countries facing possible famine in the region–Zimbabwe, Mozambique, and Malawi–also decided to reject U.S. corn as food aid unless the corn was milled to prevent it from being planted. As a reason for their refusals, Zambia and the other countries cited the fear that if any GM corn imported as food aid was planted by farmers instead, they would lose their current status as “GM-free” countries, as designated by importers in the European Union (EU). This loss, the governments worried, would compromise their ability in the future to export crops and foods to countries in the EU, where GM products are unpopular and more tightly regulated.

The United Nations was able to replace most of the rejected corn shipments to Zambia with non-GM corn from Tanzania and South Africa, but not before hardships in the country increased. As but one sign of such hardship, in January 2003 some 6,000 hungry people in one rural town overpowered an armed police officer and looted a storehouse filled with U.S. corn that, they had heard, the government would soon insist be removed from the country.

These events persuaded trade officials in Washington that the more restrictive GM-food policies that had taken hold in Europe were now a threat not just to U.S. commercial farm commodity sales but also to the efficient international movement of food aid for famine relief. The United States renewed calls for the EU to relax its regulatory and import restrictions on GM crops and foods. U.S. officials pointed out that scientists in Europe had been unable to find any evidence of added risk to human health or the environment from any GM crop variety developed to date. For example, in 2001 the EU Commission for Health and Consumer Affairs released a summary of 81 scientific studies of GM foods conducted over a 15-year period. None of the studies–all financed by the EU, not private industry–found any scientific evidence of added harm to humans or the environment. In December 2002, the French Academies of Sciences and Medicine drew a similar conclusion in a report that said that “there has not been a health problem–or damage to the environment” from GM crops. This report blamed the rejection and over-regulation of GM technologies in Europe on what it called a “propagation of erroneous information.”

Early in 2003, the United States moved closer to bringing to the World Trade Organization (WTO) a formal challenge of EU regulations regarding GM foods and crops, particularly a five-year EU moratorium on new GM crop approvals. But the immediate chance of success is poor. Although the United States has a solid scientific and legal case, the political and commercial foundation for challenging the EU on these products is currently quite weak. Even if the United States wins such a challenge, the likely result will be no change in EU regulations and a continued spread, into the developing world, of highly precautionary EU-style regulations on GM foods and crops.

This outcome would not necessarily be a calamity for U.S. farmers. It might only mean turning the clock back to 1995, the “birthday” of the first GM crop, and returning to the use of non-GM seeds by corn and soybean farmers. U.S. farm income likely would dip slightly, because production costs would increase as the need for insecticides and herbicides increased, but farmers would adapt. The situation would be more problematic for the big corporations that originally developed GM crops, as they would lose expected returns on their major investments in research and development.

But should Europe’s precautionary principle spread internationally, the biggest losers of all will be poor farmers in the developing world. If this new technology is killed in the cradle, these farmers could miss a chance to escape the low farm productivity that is helping to keep them in poverty.

Poor farmers in tropical countries are facing unsolved problems from crop pests, crop disease, low soil fertility, and drought. This is one reason per capita food production in Africa has been declining for the past 30 years. Since 1975, the number of malnourished children in Africa has more than doubled, reaching 30 million. Fifty million Africans suffer from vitamin A deficiency, and 65 percent of African women of childbearing age are anemic. Two-thirds of all poor and poorly fed Africans are farmers; for them, increased farm productivity would be the best escape from poverty and hunger. GM technologies hold considerable promise for these people. Maize farmers in Kenya, who now lose 45 percent of their crop to stem borers, could be helped if given the chance to plant currently available GM maize. Cowpea farmers in Cameroon, who lose more than half of their crop to pod borers and weevils, could be helped if given the chance to grow GM cowpeas. Moreover, it might soon be possible to use new recombinant DNA techniques to provide these farmers with even more desperately needed drought-resistant or nitrogen-fixing food crops.

In light of the promise of GM crops, the United States should modify its approach to dealing with current EU restrictions. A number of steps may help reverse the regulatory tide that now is preventing GM food and feed crops from reaching the poor farmers in greatest need.

Hurdles to growth

Today, 99 percent of the world’s plantings of GM food and feed crops are restricted to just four countries–the United States, Canada, Argentina, and (illegally) Brazil. This reflects, more than anything else, a globalization of Europe’s highly precautionary regulatory approach toward GM technology. There are four primary channels of influence through which this European triumph is now taking place: intergovernmental organizations (IGOs), development assistance, nongovernmental organizations (NGOs), and international food and commodity markets.

Intergovernmental organizations. It is not surprising that European influence dominates within most of the IGOs that deal with GM foods and crops. European governments work hard to maintain and develop their influence within IGOs, whereas the U.S. government too often ignores or disrespects these groups–by failing to pay dues on time, failing to send high-ranking delegations to IGO meetings, or failing to ratify agreements. Thus IGOs that should be promoting GM crops are not doing so, and IGOs that are regulating GM crops are doing so in the manner Europeans prefer. For example, the United Nations Food and Agriculture Organization (FAO), the Consultative Group on International Agricultural Research (CGIAR), and the World Bank should be promoting GM crops because these organizations are production-oriented and pro-technology, as well as traditionally supportive of the United States. But U.S. financial support for and diplomatic attention to these bodies have weakened in the past decade, so in the current climate of European misgivings toward GM crops, the organizations have all backed away from promoting such technologies.

The FAO now mostly provides advice on how to regulate GM technologies, not on how to shape their development or promote their use. The FAO’s director general has even stated publicly that GM foods are not needed to meet the UN’s objective of halving world hunger by 2015. The United States, which has been delinquent in paying its dues to the FAO, has lost influence within the organization. At an FAO summit in 2002, U.S. officials pressed for an endorsement of GM crops, but the best the organization could come up with was an endorsement of what it called “new technologies including biotechnology.” By using the word biotechnology instead of referring specifically to genetic modification, the FAO signaled its discomfort with GM crops.

The CGIAR, whose stated goals include “promoting cutting-edge science to help reduce hunger and poverty,” also has backed away from GM technologies. It is true that one of the group’s units, the International Rice Research Institute (IRRI) in the Philippines, is supposed to be developing “golden rice,” a GM crop that may offer benefits to growers and consumers alike. But IRRI recently decided not to conduct any field trials of any GM rice in the Philippines, for fear that they would stir up the anger of local NGOs opposed to GM crops, and only two of the institute’s 800 scientists are still working on this project. Indeed, the current CGIAR annual report makes only one reference to GM crops–and even this refers to the regulation of possible biosafety threats from such crops, not to possible benefits. Such reticence in championing this new technology is again no surprise, given that European financial contributions to the CGIAR are now twice as large as contributions from North America.

Paralysis has struck the World Bank as well. Several years ago, the Bank set out to draft a strategy document on GM crops, but political opposition at the highest levels prevented the document–as bland as it was–from ever gaining official approval. The Bank’s current strategy is not to promote GM crops, but to study them. In late summer of 2002, the World Bank announced a three-year global consultation process to examine the “possible benefits” of this new technology and also the alleged drawbacks–an approach designed to avoid criticism from Europe.

Even as the IGOs that should be promoting GM crops are failing to do so, a number of equally powerful IGOs are taking a distinctly European approach to regulating this new technology. For example, the United Nations Environment Programme (UNEP) has now redirected some of its funding to helping developing countries draft precautionary biosafety regulations for GM crops. UNEP wants such regulations to be in place before these countries begin any planting of GM crops, no matter the delay or administrative cost this might entail. UNEP also helped to negotiate the new Cartagena Biosafety Protocol, developed in 2000 as part of the Convention on Biological Diversity (CBD). This protocol explicitly endorses a “precautionary approach” to GM technologies and allows governments to limit imports of living GM crops and seeds, even without scientific demonstration of a specific risk to the environment. U.S. influence over the negotiations was diminished because the Senate had never ratified the original CBD. Thus U.S. officials could participate in the protocol negotiations only as “observers”–not a good way to influence the outcome.

Development assistance. The United States at one time financed development assistance generously, in part hoping to influence policies in poor countries, but such assistance programs have withered since the end of the cold war. This is particularly true for agriculture. Between 1992 and 1999, support for agricultural development from the U.S. Agency for International Development (USAID) fell by more than 50 percent. In Africa, the United States largely withdrew from such work, closing USAID missions and sending agency personnel home. Meanwhile, European donors remained very much on the scene, ready to advise African governments on how to regulate GM crops. The Dutch, Danes, and Germans remained active, consistently advocating for ratification of the Cartagena Biosafety Protocol and formal adoption of the precautionary principle. Developing countries were warned not to plant GM crops until precautionary biosafety screening procedures were fully in place.

This influence has fostered the import of European-style regulatory systems, even though the capacity of many developing nations to implement such complex biosafety screening procedures is often relatively low. The practical result has been regulatory paralysis. Once complex biosafety screening requirements have been written into the laws of poor countries, cautious politicians and bureaucrats discover that the safest thing, politically, is to approve no GM crops at all. Approving nothing is the best way to conceal a weak technical capacity to screen GM technologies on a case-by-case basis, and it is a good way to avoid criticism from outside groups and the media.

Indeed, in all of Africa, not a single country other than South Africa has yet approved any GM crops for commercial planting. In all of developing Asia, only the Philippines has approved the planting of any major GM food or feed crop; elsewhere in the region, no GM rice, soybeans, or corn has been approved. Only one industrial crop–called Bt cotton–has been approved, and that only in a few Asian countries. This variety has been modified to contain a gene from the bacterium Bacillus thuringiensis, or Bt, that produces a toxin that kills bollworms, the crop’s major pest.

Nongovernmental organizations. European-based environmental and anti-globalization NGOs have invested heavily in efforts to block GM food technologies. These groups were instrumental in prompting the EU in 1998 to impose a moratorium on new GM crop approvals, and they now are working to prevent approvals in the developing world. Greenpeace, for example, has invested $7 million to stop genetic engineering, focusing particularly on developing countries that have not yet approved any GM crops.

Of course, private biotechnology companies–such as Monsanto, the U.S.-based firm that developed many of the first GM crops–spend a lot more than this to promote the spread of GM crops. But NGOs go beyond paid media campaigns; they also employ direct actions, street protests, and lawsuits to generate free media attention. Lawsuits, in particular, have emerged as a proven method for delaying the introduction of GM crops in poor countries. In 1998, Monsanto thought it had won official approval in Brazil for five varieties of Roundup Ready soybeans, which can withstand applications of the herbicide Roundup used to kill troublesome weeds in soybean fields. But a local consumer NGO and the Brazilian office of Greenpeace filed a lawsuit and found a sympathetic federal court judge to issue an injunction that stopped the approval. This case remains caught up in the Brazilian court system, so planting GM seeds in the country remains illegal, even though farmers there have been eager to do so–and, indeed, are illegally smuggling in GM seeds from Argentina.

NGOs also were instrumental in convincing Zambia not to accept GM products for humanitarian food relief, even though 2.5 million Zambians were hungry and at risk of famine. Political leaders were frightened away in part by campaigns conducted by such groups as Action Aid, from the United Kingdom, and the Dutch branch of Friends of the Earth. Following its initial refusal of U.S. corn, Zambia reaffirmed the import ban on GM food aid in November 2002, after a team of government experts consulted with a number of outside organizations–notable among them European NGOs deeply opposed to GM technologies. They also heard negative views from the British Medical Association, which had no evidence of any added health risks from GM foods but was clinging to a position it had taken three years earlier that the technology had not yet been sufficiently tested for all hypothetical risks.

Frustrated with this crisis, U.S. officials late in 2002 asked the European Union and the World Health Organization (WHO) to reassure officials in Zambia that there was no scientific evidence of risk from the corn being offered and to remind the Zambians that even EU regulators had given food-safety approvals to several varieties of GM corn and soybeans. But the EU responded at first by claiming this was a matter between the United States and Zambia. The WHO also disappointed the United States when its director-general, addressing a group of health ministers from southern Africa, said only that the GM corn was “not likely” to present a risk.

International markets. Many observers originally assumed that once the United States began growing GM food and feed products, the technology would quickly become pervasive. The United States is the world’s biggest exporter of agricultural goods, so these products would have to be accepted worldwide. That was the wrong way to look at the matter. In international commodity markets, the big importers, not the big exporters, usually set standards–and the biggest importers are Europe and Japan, which together import $90 billion in agricultural products annually. Europe imports 75 percent more food and farm products from developing countries every year than the United States does. Accordingly, developing countries that aspire to export farm products must pay close attention to European consumer preferences and import regulations.

If it were only consumer opinion in Europe that mattered, then nobody could really complain, since that would be a free-market outcome. But consumer preference is not the main barrier: European consumers have shown that they are willing to pay only small premiums in the market for GM-free food. Rather, the problem is the increasing variety of EU regulations and policy actions that are keeping GM products off the shelf. It seems that European officials, having under-regulated mad cow disease, dioxin contamination, and hoof-and-mouth disease, are hoping to restore their credibility, in part, by over-regulating GM foods.

One major concern centers on the EU’s plans to implement, by late 2003, new regulations covering the tracing and labeling of GM products. These regulations go beyond the informal EU moratorium on new GM biosafety approvals, implemented in 1998, a policy that has required an import ban on any bulk commodity shipments from the United States that possibly might contain unapproved GM varieties. The EU says that it is enacting the traceability and labeling regulations in order to facilitate lifting the approval moratorium, arguing that EU consumers, political leaders, and anti-GM activists may be less likely to object to new GM crop approvals if such rules are in place. But under the rules, any developing country hoping to export farm products to Europe may be compelled to remain GM-free.

Mandatory GM labeling will now apply not only to human foods but also to animal feed, and even to processed products where there is no longer any physically detectable GM content. Even more troublesome will be the traceability regulation, which will oblige every operator in the food chain to maintain a legal audit trail for all GM products, recording where they came from and where they went. The intent is to facilitate enforcement of the new labeling rule and also to allow quick removal of any product that might prove to be unsafe. In Europe, where farmers do not grow GM crops, this will mostly be a paperwork exercise. But for GM-crop-producing countries such as the United States, the regulation will force exporters to start segregating GM from non-GM corn and soybean products throughout the food chain, from farm to fork. At the very low threshold of contamination to be permitted under the EU regulations (shipments must be 99.1 percent free of any GM content to escape being labeled and covered under the regulation), this segregation process will raise the export price of the U.S. corn and soybean shipments in question. Fearing either loss of access to Europe or loss of competitiveness, U.S. farmers might eventually have to retreat from planting GM seeds.
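
To make the mechanics of these rules concrete, here is a minimal sketch, illustrative only, of the two obligations described above: a labeling check against the permitted contamination threshold (99.1 percent GM-free, that is, no more than 0.9 percent GM content) and an audit-trail record of where a lot came from and where it went. The class and field names are invented for this example and do not correspond to any actual EU compliance system.

    from dataclasses import dataclass, field

    GM_CONTENT_THRESHOLD = 0.009  # 0.9 percent GM content; shipments must be 99.1 percent GM-free

    @dataclass
    class Lot:
        """One consignment of corn or soybeans moving through the food chain."""
        lot_id: str
        gm_fraction: float                          # estimated share of GM material, 0.0 to 1.0
        trail: list = field(default_factory=list)   # audit trail of (operator, received_from) records

        def requires_gm_label(self) -> bool:
            # Labeling applies whenever GM content exceeds the threshold, even if
            # no GM material is physically detectable in the finished product.
            return self.gm_fraction > GM_CONTENT_THRESHOLD

        def record_transfer(self, operator: str, received_from: str) -> None:
            # Traceability: every operator in the chain records where the lot came from.
            self.trail.append((operator, received_from))

    # A commingled U.S. soybean lot with 2 percent GM content must be labeled;
    # a segregated lot at 0.5 percent escapes labeling but still needs its paper trail.
    commingled = Lot("US-2003-0001", gm_fraction=0.02)
    commingled.record_transfer("elevator", "farm")
    commingled.record_transfer("exporter", "elevator")
    print(commingled.requires_gm_label())   # True

    segregated = Lot("US-2003-0002", gm_fraction=0.005)
    print(segregated.requires_gm_label())   # False

The point of the sketch is that the cost lies less in the threshold test itself than in maintaining the segregation and paperwork behind it at every step from farm to fork.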

U.S. trade officials argued for a softening of these new regulations, hoping to reduce their likely impact on U.S. exports. They asked that the threshold of permitted contamination be raised and that labels be required only if some GM content is physically detectable. They also called for a provision that labels need only state that products or shipments “may contain” certain genetically modified crops, rather than having to say exactly which GM crops they contain. But these requests found little support when the EU Councils of Agricultural and Environmental Ministers approved the new regulations late in 2002. Only the United Kingdom appeared ready to call for any weakening of the regulation. At the other extreme, the French wanted labels not only on all processed GM foods but also on meat from animals that have been fed GM crops, and even on pet foods that contain GM ingredients.

The United States believes that both the moratorium on new approvals of GM products and the traceability and labeling regulations violate several WTO agreements. The moratorium, for example, violates the Sanitary and Phytosanitary agreement, which calls for all such regulatory actions to be based on firm assessments of scientific risk–and even EU officials admit that there is no scientific evidence to justify the moratorium. The United States has threatened to challenge the moratorium in the WTO for the past four years, and in January 2003 the nation’s chief trade representative announced that he personally favored bringing such a case in the near future. But since the new regulations have not yet taken effect, no formal challenge is possible in the WTO at the moment.

If U.S. officials do pursue WTO cases against these regulations, the result will probably be more frustration. The United States could win legally but lose politically and commercially. The WTO can be useful at times for pressuring the EU into scaling back trade distortions linked to traditional farm subsidies. But for scaling back trade impediments linked to consumer food safety fears, the WTO is a weak instrument. U.S. officials have already experienced the limits of the WTO’s dispute-settlement powers in recent cases regarding hormone-treated beef. The United States won (twice) in the WTO but still failed to open the EU market to such beef. The hormone ban was popular with consumers in Europe and had been endorsed by the European Parliament, so the EU decided to comply with WTO rules by paying the United States an annual fine rather than lifting the ban. Therefore, even if the United States were to prevail in challenging the GM-food regulations, there is little guarantee that U.S. food sales to Europe would increase. The EU might simply decide to pay a fine, or consumers might still shun GM foods.

It seems almost certain, then, that the regulatory movement in Europe toward tighter restrictions on GM imports will continue, and agricultural exporters worldwide know this. Indeed, fear of losing exports to Europe already is slowing down this technology even in some strongly pro-GM countries, such as Argentina. Since 1998, Argentina has made it a policy not to approve any new GM varieties until they have been approved for import into the EU. China, another early GM enthusiast, has similarly decided to hold back on approving the commercial planting of GM maize, soybeans, or rice. China began its slowdown after importers in the EU turned away a shipment of soy sauce made in Shanghai from U.S. soybeans that might have had a GM origin. Even the United States and Canada are slowing development of some new GM crops, such as GM wheat, for fear of losing export sales of wheat or flour.

Steps for U.S. action

U.S. officials have so far struggled to find useful short-term options for slowing the spread of EU-style regulations to the developing world. The best approach now will be to put aside past failures and to adopt a longer-term strategy that rests on a fundamental belief in the ultimate promise of genetically modified crops.

One such approach would be to begin investing more public money in the development of GM technologies specifically tailored to the needs of poor farmers in tropical countries. The United States made a mistake in the 1980s when it began skimping on public investments in the development of GM crops and entrusted this job so completely to the private sector. U.S. companies responded by developing a first generation of GM crops designed mostly to please wealthy commercial farmers in the United States and other temperate zone regions.

It was thus relatively easy for anti-GM activists in Europe and elsewhere to oppose the first GM products, such as Monsanto’s herbicide-tolerant soybeans. The corporate motive (in Monsanto’s case, to facilitate sales of its Roundup herbicide) was transparent, and few poor farmers in the semiarid tropics can grow soybeans. It would have been far more difficult for activists to build a political consensus against GM crops if the first products had included an insect-resistant or disease-resistant variety of cassava or cowpea. But private companies have few incentives to produce such varieties for poor farmers, so this is a job that must be done through public research institutions and international assistance agencies. This kind of public leadership was key to ushering in the Green Revolution of the 1960s and 1970s, which brought high-yielding wheat and rice varieties to poor farmers throughout Asia. To hope that a comparably successful Gene Revolution could be launched in poor countries without stronger public leadership was a mistake.

Given the momentary blockage of GM food and feed crops, it also may prove fruitful to refocus on the spread of a key industrial GM crop: Bt cotton. National biosafety regulators in South Africa, China, Indonesia, and India have all allowed farmers to plant Bt cotton, and farmers in all four countries report satisfying results in terms of lower production costs, less bollworm damage, and reduced occupational exposure to hazardous insecticide sprays. With luck, the continued success of Bt cotton in these countries will eventually whet the appetite of their policymakers for comparable GM gains in the food and feed crop sectors. Should this happen, national biosafety regulators may then feel more confident in their ability to approve GM corn, rice, and other staple crops.

Approval will be even more likely if, by then, the GM varieties under consideration have emerged from publicly funded national or international scientific institutes, rather than from corporations in the United States or other industrial nations. National research institutes in China and India are currently developing precisely such technologies. At some point, the potential domestic farm productivity gains from approving GM food and feed crops in these countries will outweigh the possible loss of future export sales.

What practical steps might the U.S. government take in the meantime to preserve space for a future GM crop revolution in the developing world? A first step is to restore the level of U.S. assistance to international organizations devoted to agricultural research and development. Governments in the developing world will pay more attention to the U.S. position on GM crops if it is perceived as part of a more generous overall development assistance posture. In March 2002, President George W. Bush proposed a 50 percent increase–an additional $5 billion–in annual U.S. core assistance to developing countries, to be administered through a Millennium Challenge Account. Later that year, USAID issued a report that proclaimed “getting agriculture moving” as one of its key interests–and added plainly that “U.S. leadership can help in restoring budgets of the agricultural research system.” If Congress heeds this message in 2003, and if U.S. contributions to international agricultural development and research are indeed restored, then U.S. influence over the range of technological choices open to developing countries can perhaps be restored as well.

In addition, U.S. officials need to become more adept at giving voice to the expressed needs and desires of farmers, scientists, and officials from the developing world. If presented from a U.S. perspective only, the international case for GM crops is inherently hard to make. If instead the views and positions of struggling farmers in the developing world are presented, the case becomes difficult to deny. Small farmers growing GM cotton in China, India, and South Africa already are testifying to the great benefit this crop provides in terms of added income and improved occupational and environmental safety. This testimony needs to be heard. So, too, does the testimony of agricultural scientists in stakeholder countries. Professor James Ochanda, a Kenyan biotechnologist and chair of the African Biotechnology Stakeholders Forum, told a European audience in January 2003: “We do not want to be a pawn in the transatlantic trade squabble. We have our own voice and want to make our own decisions on how to use this new technology.”

When Children Die

The death of a child is a special sorrow, an enduring loss for surviving mothers, fathers, brothers, sisters, other family members, and close friends. No matter the circumstances, a child’s death is a life-altering experience. In 1999, approximately 55,000 children ages 0 to 19 died in the United States.

Fortunately, in the United States and other developed countries, death in childhood is no longer common. Many infants who once would have died from prematurity, complications of childbirth, and congenital anomalies (birth defects) now survive. Likewise, children who would have perished in the past from an array of childhood infections today live healthy and long lives, thanks to sanitation improvements, vaccines, and antibiotics.

In the course of a century, the proportion of all deaths in the United States occurring in children under age 5 dropped from 30 percent in 1900 to just 1.4 percent in 1999. Infant mortality dropped from approximately 100 deaths per 1,000 live births in 1915 to 7.1 per 1,000 in 1999.

Because adults account for most deaths in the United States, programs of palliative and end-of-life care understandably focus on adults, especially older adults, who account for over 70 percent of deaths each year. Health care professionals and others are, however, increasingly responding to the special needs of gravely ill or injured children and their families, including palliative and end-of-life care that reflects differences in the major causes of death. A comprehensive study of this issue can be found in the Institute of Medicine report When Children Die: Improving Palliative and End-of-Life Care for Children and Their Families.

Greatest risk at birth

About half of all child deaths occur during infancy, and most of these deaths occur soon after birth.

Percentage of Total Childhood Deaths by Age Group (1999)

Accidents become important later

Reflecting the concentration of child deaths among infants, an array of newborn and infant conditions ranks among the leading causes of child mortality. Another large fraction of child deaths–about 30 percent–is accounted for by unintentional and intentional injuries.

Percentage of Total Childhood Deaths by Cause (1999)

Causes of death differ with age

Patterns of child mortality differ considerably from patterns for adults, especially elderly adults who die primarily from chronic conditions such as heart disease and cancer. Among newborns, most deaths are due to congenital abnormalities or complications associated with prematurity, pregnancy, or childbirth. For older infants, sudden infant death syndrome (SIDS) is an important cause of death. Fatal injuries dominate among children and adolescents.

Numbers of Deaths by Cause and Age Group (1999)

Five leading causes of death (number of deaths), by age group:

Infant (<1 year): Congenital anomalies (5,473); Short gestation and low birth weight (4,392); SIDS (2,648); Complications of pregnancy (1,399); Respiratory distress syndrome (1,110)

1-4 years: Accidents (1,898); Congenital anomalies (549); Malignant neoplasms (418); Homicide (376); Diseases of the heart (183)

5-14 years: Accidents (3,091); Malignant neoplasms (1,012); Homicide (432); Congenital anomalies (428); Diseases of the heart (277)

15-24 years: Accidents (13,656); Homicide (4,998); Suicide (3,901); Malignant neoplasms (1,724); Diseases of the heart (1,069)

Regional differences

Reflecting social, economic, physical, and other differences, states and regions show considerable variation in child mortality by cause. Variation at the state level is more dramatic than variation among regions. In 1999, the District of Columbia had the highest infant mortality rate (15.0 per 1,000 live births), followed by South Carolina (10.2 per 1,000 live births). Maine and Utah had the lowest rate at 4.8 deaths per 1,000 live births. In the same year, for those aged 0 to 19, Wyoming led the nation in motor vehicle fatality rates (23.5 per 100,000), followed by Mississippi (20.9 per 100,000). The lowest fatality rates were for Hawaii (3.6 per 100,000) and Rhode Island (3.8 per 100,000). Juvenile homicide rates also differ substantially among states. Maryland led the nation in 1999 with a homicide rate of 7.8 per 100,000, followed by Illinois at 7.25 per 100,000. Hawaii and Utah had the lowest rates at 0.6 and 0.75 per 100,000, respectively. Juvenile homicides are concentrated among males in impoverished areas of large urban counties.

Death Rates for Selected Causes by Geographic Region (1999)

Gender, race, and ethnicity also matter

Demographic variation is also significant. Across all age ranges and for most causes of death, boys have a higher death rate than girls. At all ages, the death rates for black children are higher than for white or Hispanic children.

Deaths Due to Injury Compared to Other Conditions, by Age and Race (1999)

Injury deaths, rate per 100,000 (number of deaths):
1-4 years: Black 27.4 (609); White 13.4 (1,605); Black/White ratio 2.0
5-9 years: Black 14.8 (465); White 7.2 (1,129); Black/White ratio 2.1
10-14 years: Black 13.8 (426); White 10.7 (1,650); Black/White ratio 1.3
15-19 years: Black 69.6 (2,119); White 51.2 (8,009); Black/White ratio 1.4

Deaths from other conditions, rate per 100,000 (number of deaths):
1-4 years: Black 30.6 (693); White 16.1 (1,936); Black/White ratio 1.9
5-9 years: Black 13.2 (418); White 8.2 (1,278); Black/White ratio 1.6
10-14 years: Black 13.9 (416); White 9.1 (1,385); Black/White ratio 1.5
15-19 years: Black 22.9 (692); White 13.9 (2,156); Black/White ratio 1.6
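
As a quick check on how the ratio column above is derived, this short sketch, illustrative only, recomputes the Black/White ratios for injury deaths; the rates per 100,000 are copied directly from the rows above.

    # Injury death rates per 100,000 by age group (Black, White), copied from the table above.
    injury_rates = {
        "1-4":   (27.4, 13.4),
        "5-9":   (14.8, 7.2),
        "10-14": (13.8, 10.7),
        "15-19": (69.6, 51.2),
    }

    for age, (black, white) in injury_rates.items():
        print(f"{age}: Black/White ratio = {black / white:.1f}")
    # Prints 2.0, 2.1, 1.3, and 1.4, matching the table's ratio column.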

Source data for all tables and figures: National Center for Health Statistics

Controlling Dangerous Pathogens

Remarkable advances are underway in the biological sciences. One can credibly imagine the eradication of a number of known diseases, but also the deliberate or inadvertent creation of new disease agents that are dramatically more dangerous than those that currently exist. Depending on how the same basic knowledge is applied, millions of lives might be enhanced, saved, degraded, or lost.

Unfortunately, this ability to alter basic life processes is not matched by a corresponding ability to understand or manage the potentially negative consequences of such research. At the moment, there is very little organized protection against the deliberate diversion of science to malicious purposes. There is even less protection against the problem of inadvertence, of legitimate scientists initiating chains of consequence they cannot visualize and did not intend.

Current regulation of advanced biology in the United States is concerned primarily with controlling access to dangerous pathogens. Only very limited efforts have been made thus far to consider the potential implications of proposed research projects before they are undertaken. Instead, attention is increasingly being directed toward security classification and expanded biodefense efforts to deal with concerns about the misuse of science for hostile purposes. Few U.S. officials appear to recognize the global scope of the microbiological research community, and thus the global nature of the threat. We believe that more systematic protection, based on internationally agreed rules, is necessary to prevent destructive applications of the biological sciences, and we have worked with colleagues to develop one possible approach.

The emerging threat

Shortly after the September 11, 2001, terrorist attacks, envelopes containing relatively pure, highly concentrated Bacillus anthracis powder were mailed to several prominent U.S. media outlets and politicians. After years of warnings, anthrax had been unleashed in a bioterrorist attack on U.S. soil. In the end, 5 people died and 17 others were infected. An estimated 32,000 people were given antibiotics prophylactically, with some 10,300 of those being urged to continue treatment for 60 days. Although adherence to the full treatment regimen was poor, the prompt initiation of antibiotics may have prevented hundreds if not thousands of others from dying or becoming ill. What would have happened if a more sophisticated delivery system or an antibiotic-resistant strain of anthrax had been used instead?

Biological weapons experts have debated for years whether the biotechnology revolution would lead to the development of new types of biological agents that were more lethal, more difficult to detect, or harder to treat. Some believed that there was little advantage in trying to improve the wide range of highly dangerous pathogens already available in nature. Beginning in the late 1980s, however, reports from defectors and other former Soviet biological weapons scientists proved this notion to be false. According to these sources, under the Soviet offensive program, Legionella bacteria were genetically engineered to produce myelin, resulting in an autoimmune disease with a mortality rate in animals of nearly 100 percent. In another project, Venezuelan equine encephalomyelitis genes were inserted into vaccinia (the vaccine strain of smallpox) reportedly as part of an effort to create new combination agents known as “chimeras.” In yet another project, genes from a bacterium that causes food poisoning, Bacillus cereus, were introduced into Bacillus anthracis, producing a more virulent strain of anthrax that even killed hamsters that had been vaccinated against the disease.

One need not look only to the former Soviet program for examples of how advances in the biological sciences could be deliberately or inadvertently misused for destructive applications. Research with possible destructive consequences is also being carried out in the civilian biomedical and agricultural community, both in universities and private-sector laboratories. Perhaps the most famous example is the mousepox experiment, in which Australian researchers trying to develop a means of controlling the mouse population inserted an interleukin-4 (IL-4) gene into the mousepox virus and in so doing created a pathogen that was lethal even to mice vaccinated against the disease. This work immediately raised the question of whether the introduction of IL-4 into other orthopox viruses, such as smallpox, would have similarly lethal effects. It also drew attention to the absence of internationally agreed rules on how to handle research results that could be misused. After publication of the research in February 2001, Ian Ramshaw, one of the principal investigators, called for the creation of an international committee to provide advice to scientists whose research produces unexpectedly dangerous results.

Other research projects since that time have been equally controversial. In one Department of Defense (DOD)-funded study, published in Science in July 2002, researchers from the State University of New York at Stony Brook created an infectious poliovirus from scratch by using genomic information available on the Internet and custom-made DNA material purchased through the mail. Members of Congress responded with a resolution criticizing the journal for publishing what was described as a blueprint for terrorists to create pathogens for use against Americans and calling on the executive branch to review existing policies regarding the classification and publication of federally funded research. Craig Venter of the private human genome project described the poliovirus work as “irresponsible” and, with University of Pennsylvania ethicist Arthur Caplan, called for new mechanisms to review and approve similar projects before they are carried out. A few months later, Venter and Nobel laureate Hamilton O. Smith announced their own rather provocative research goal: the creation of a novel organism with the minimum number of genes necessary to sustain life. Although the researchers emphasized that the organism would be deliberately engineered to prevent it from causing disease in humans or surviving outside of a laboratory dish, they acknowledged that others could use the same techniques to create new types of biological warfare agents.

In another project, University of Pennsylvania researchers, using previously published data on smallpox DNA, reverse-engineered a smallpox protein from vaccinia and then showed how smallpox evades the human immune system. The research, published in June 2002, raised the question of whether the same protein could be used to make other orthopox viruses such as vaccinia more lethal. In an unusual move, the article was accompanied by a commentary defending publication and arguing that it was more likely to stimulate advances in vaccines or viral therapy than to threaten security.

Researchers have also begun to discuss the implications of the progress made in recent years in sequencing the genome of the virus responsible for the 1918 influenza pandemic. In 1997, researchers at the Armed Forces Institute of Pathology succeeded in recovering fragments of the virus from preserved tissue samples. Already, several of the eight segments of the virus genome have been sequenced and published. Once the complete sequence is obtained, it may be possible to use reverse genetics to recreate the deadly virus, which is estimated to have killed as many as 40 million people in a single year.

Other, more future-oriented research is also of concern. Steven Block, who led a 1997 study for the U.S. government on next-generation biological weapons, has called attention to the possibility of gene therapy being subverted to introduce pathogenic sequences into humans, or of new zoonotic agents being developed that move from animals to humans. Both Block and George Poste, who chairs a DOD panel on biological weapons threats, have also noted the possibility of stealth viruses that could be introduced into a victim but not activated until later and of designer diseases that could disrupt critical body functions.

New restrictions

Thus far, the U.S. response to these developments has had a distinctly national focus. Less than a month after the first anthrax death, Congress enacted legislation aimed at tightening access to pathogens and other dual-use biological materials within the United States. Under the USA Patriot Act, signed into law on October 26, 2001, it is now a crime for anyone to knowingly possess any biological agent, toxin, or delivery system that is not reasonably justified by a prophylactic, protective, bona fide research, or other peaceful purpose. The law also makes it a crime for certain restricted persons, including illegal aliens and individuals from terrorist-list countries, to possess, transport, or receive any of the threat agents on the Centers for Disease Control and Prevention’s (CDC’s) “select agent” list. The American Society for Microbiology (ASM) and others have criticized the restricted-persons provision, arguing that the absence of waiver authority could preclude legitimate researchers from restricted countries from undertaking work that could benefit the United States.

Other bioterrorism legislation passed in May 2002 requires any person who possesses, uses, or transfers a select agent to register with the secretary of Health and Human Services (HHS) and to adhere to safety and security requirements commensurate with the degree of risk that each agent poses to public health. The law requires a government background check for anyone who is to be given access to select agents. In addition, HHS is required to develop a national database of registered persons and the select agents they possess, including strain and other characterizing information if available, and to carry out inspections of relevant facilities. The Department of Agriculture (USDA) is required to develop parallel registration, security, record-keeping, and inspection measures for facilities that transfer or possess certain plant and animal pathogens. These new controls build on legislation adopted in 1996, after the Oklahoma City bombing and the acquisition of plague cultures by a member of the Aryan Nations, requiring any person involved in the transfer of a select agent to register with HHS and notify it of all proposed transfers.

In another move, seemingly at odds with the greatly expanded effort to control access to dangerous pathogens, the government has dramatically increased research funding related to biological warfare agents. In March 2002, the National Institutes of Health (NIH) announced a $1.7 billion fiscal year 2003 bioterrorism research program, a 2,000 percent increase over pre-September 11 budget levels. Under the program, some $440 million is to be spent on basic research, including genomic sequencing and proteomic analysis of up to 25 pathogens, and $520 million is to be used for new high-containment and maximum-containment laboratories and regional centers for bioterrorism training and research. In his 2003 State of the Union message, President Bush proposed to spend an additional $6 billion over 10 years to develop and quickly make available biological warfare agent vaccines and treatments under a new HHS-Department of Homeland Security program called Project Bioshield. The Department of Energy (DOE) has also been increasing its bioterrorism research program, which was first begun in 1997. As part of this effort, DOE is funding research aimed at determining the complete genetic sequence of anthrax and other potential biological warfare agents and comparing agent strains and species using DNA information. Other DOE studies are using genetic sequencing to identify genes that influence virulence and antibiotic resistance in anthrax and plague and to determine the structure of the lethal toxins produced by botulinum and other biological agents that can be used against humans.

Against this backdrop of increased research, the United States is also exploring possible restrictions on the dissemination of scientific findings that could have national security implications–what has been called “sensitive but unclassified” information. Since the Reagan administration, U.S. policy on this issue has been enshrined in National Security Decision Directive (NSDD) 189, which states: “…to the maximum extent possible, the products of fundamental research [should] remain unrestricted… where the national security requires control, the mechanism for control of information generated during federally funded fundamental research in science, technology and engineering… is classification.” National Security Advisor Condoleezza Rice affirmed the administration’s commitment to NSDD 189 in a November 2001 letter.

But in a memorandum to federal agencies in March 2002, White House Chief of Staff Andrew Card raised the need to protect sensitive but unclassified information. At the same time, the Pentagon circulated a draft directive containing proposals for new categories of controlled information and for prepublication review of certain DOD-funded research. Because of strong criticism from the scientific community, the draft was withdrawn. Last fall, however, the White House Office of Management and Budget began developing rules for the “discussion and publication” of information that could have national security implications. These rules, which were reportedly requested by Homeland Security chief Tom Ridge, are expected to apply to research conducted by government scientists and contractors but not, at least initially, to federally funded research grants. This has not assuaged the concerns of the 42,000-member ASM, which in July 2002 sent a letter to the National Academies asking it to convene a meeting with journal publishers to explore measures the journals could implement voluntarily as an alternative to government regulation. This meeting, which was held in January 2003, laid the groundwork for a subsequent decision by 30 journal editors and scientists to support the development of new processes for considering the national security implications of proposed manuscripts and, where necessary, to modify or refrain from publishing papers whose potential harm outweighs their potential societal benefits.

In a surprising move, the government has also taken a very modest step toward strengthening the oversight process for biotechnology research in the United States. Under the new HHS regulations to implement the May 2002 controls on the possession, transfer, and use of select agents, the HHS secretary must approve genetic engineering experiments that could make a select agent resistant to known drugs or otherwise more lethal. The new USDA regulations appear to be even broader, in that they seem to apply to any microorganism or toxin, not just to those on the USDA control list. The latter provision mirrors the current requirements of the NIH Guidelines, under which biotechnology research has been regulated for more than a quarter century.

Under the original NIH Guidelines, published by the NIH Recombinant DNA Advisory Committee (RAC) in 1976, six types of experiments were prohibited. However, once it became clear that recombinant DNA research could be conducted safely, without an adverse impact on public health or the environment, these prohibitions were replaced by a system of tiered oversight and review, in which Institutional Biosafety Committees (IBCs) and Institutional Review Boards (IRBs) at individual facilities replaced the RAC as the primary oversight authority for most categories of regulated research.

Today, only two categories of laboratory research involving recombinant DNA technology are subject to NIH oversight. The first, “major actions,” cannot be initiated without the submission of relevant information on the proposed experiment to the NIH Office of Biotechnology Activities (OBA), and they require IBC approval, RAC review, and NIH director approval before initiation. This category covers experiments that involve the “deliberate transfer of a drug resistance trait to microorganisms that are not known to acquire the trait naturally if such acquisition could compromise the use of the drug to control disease agents in humans, veterinary medicine, or agriculture.” The second category of experiments requiring IBC approval and NIH/OBA review before initiation involves the cloning of toxin molecules with a median lethal dose (the dose found to be lethal to 50 percent of those to which it is administered) of less than 100 nanograms per kilogram of body weight. Unlike the requirements in the new select agent rules, the NIH Guidelines apply only to research conducted at institutions in the United States and abroad that receive NIH funding for recombinant DNA research. Many private companies are believed to follow the guidelines voluntarily.
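
To show how narrow these two NIH-level triggers are, the sketch below, our own simplification rather than an actual regulatory tool, encodes them as two boolean tests: the deliberate transfer of a clinically relevant drug-resistance trait (a “major action”) and the cloning of a toxin with a median lethal dose below 100 nanograms per kilogram of body weight.

    # Illustrative only: a simplified encoding of the two categories of recombinant DNA
    # experiments that still require NIH-level review under the Guidelines, as described above.

    LD50_THRESHOLD_NG_PER_KG = 100  # toxin-cloning trigger: median lethal dose below 100 ng/kg

    def is_major_action(transfers_drug_resistance: bool,
                        trait_acquired_naturally: bool,
                        could_compromise_disease_control: bool) -> bool:
        """'Major actions' need IBC approval, RAC review, and NIH director approval."""
        return (transfers_drug_resistance
                and not trait_acquired_naturally
                and could_compromise_disease_control)

    def needs_oba_review(toxin_ld50_ng_per_kg: float) -> bool:
        """Toxin-cloning experiments below the LD50 threshold need IBC approval and NIH/OBA review."""
        return toxin_ld50_ng_per_kg < LD50_THRESHOLD_NG_PER_KG

    # Cloning a toxin with an LD50 of 1 ng/kg would trigger NIH/OBA review; most routine
    # recombinant DNA work triggers neither test and stays with the local IBC.
    print(needs_oba_review(1.0))                  # True
    print(is_major_action(False, False, False))   # False

Everything outside these two tests, however dangerous its implications, falls to local committees or to no review at all, which is precisely the gap the critics describe.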

In addition to requiring prior approval for these two types of experiments, HHS and USDA asked for input from the scientific community on other types of experiments that might require enhanced oversight because of safety concerns, as well as on the form that such additional oversight should take. In particular, they sought comments on experiments with biological agents that could increase their virulence or pathogenicity; change their natural mode of transmission, route of exposure, or host range in ways adverse to humans, animals, or plants; result in the deliberate transfer of a drug-resistant trait or a toxin-producing capability to a microorganism in a manner that does not involve recombinant DNA techniques; or involve the smallpox virus.

Interestingly, the ASM did not rule out the possible need for additional oversight of certain types of microbiological research. However, in its comments on the draft HHS regulations, the ASM recommended that any additional oversight requirements be implemented through the NIH Guidelines rather than regulations, in order to provide a less cumbersome means of incorporating changes as technology evolves. The ASM also proposed the creation of a Select Agent Research Advisory Committee to provide advice to U.S. government agencies, including reviewing specific research projects or categories of research for which additional oversight is required.

A number of the domestic measures described above were also incorporated in the U.S. proposal to the Biological Weapons Convention (BWC) review conference in October 2001. Three months earlier, the United States had rejected the legally binding protocol that had been under negotiation to strengthen the 1972 treaty’s prohibition on the development, production, and possession of biological agents. In its place, the United States suggested a variety of largely voluntary measures to be pursued on a national basis by individual countries. These included a proposal that other countries adopt legislation requiring entities that possess dangerous pathogens to register with the government, as is being done in the United States. The United States also proposed that countries implement strict biosafety procedures based on World Health Organization (WHO) or equivalent national guidelines, tightly regulate access to dangerous pathogens, explore options for national oversight of high-risk biological experiments, develop a code of conduct for scientists working with pathogens, and report internationally any biological releases that could affect other countries adversely. After an acrimonious meeting, which was suspended for a year following the U.S. call to terminate both the protocol negotiations and the body in which they were being held, it was agreed that experts would meet for a two-week period each year to discuss five specific issues. Most of the issues related to strengthening controls over pathogens will be considered at the first experts’ meeting, to be held in August 2003.

U.S. approach falls short

The past several years have thus witnessed a range of U.S. initiatives aimed at reducing the likelihood that advances in the biological sciences will be used for destructive purposes. But whether viewed as a whole or as a series of discrete steps, the current approach falls short in a number of important respects:

The new controls on human, plant, and animal pathogens are too narrowly focused on a static list of threat agents. These controls can be circumvented entirely by research such as the poliovirus experiment, which demonstrated a means of acquiring a controlled agent covertly, without the use of pathogenic material, or the mousepox experiment, which showed how to make a relatively benign pathogen into something much more lethal.

The expanded bioterrorism research effort is rapidly increasing the number of researchers and facilities working with the very pathogens that U.S. policy is seeking to control, before appropriate oversight procedures for such research have been put into place. Little thought appears to have been given to the fact that the same techniques that provide insights into enhancing our defenses against biological agents can also be misused to develop even more lethal agents.

The proposed restrictions on sensitive but unclassified research will not prevent similar research from being undertaken and published in other countries. Depending on the form such restrictions take, they could also increase suspicions abroad about U.S. activities, impede oversight of research, and interfere with the normal scientific process through which researchers review, replicate, and refine each other’s work and build on each other’s discoveries.

The new oversight requirements for certain categories of biotechnology research, like the NIH Guidelines on which they are based, subject only a very narrow subset of relevant research to national-level review. And if the ASM proposal to implement these and other additional oversight requirements through the NIH Guidelines is accepted, these requirements will no longer have the force of law, unlike requirements contained in regulations.

Finally, because of the current U.S. antipathy toward legally binding multilateral agreements, the BWC experts’ discussions on pathogen controls are unlikely to result in the adoption of a common set of standards for research that could have truly global implications.

As the mousepox experiment showed, advanced microbiological research is occurring in countries other than the United States. According to the chairman of the ASM Publications Board, of the nearly 14,000 manuscripts submitted to ASM’s 11 peer-reviewed journals during 2002, about 60 percent included non-U.S. authors, from at least 100 different countries. A total of 224 of these manuscripts involved select agents, of which 115, or slightly more than half, had non-U.S. authors. Research regulations that apply only in the United States therefore will not only be ineffective but will put U.S. scientists at a competitive disadvantage. The need for uniform standards, embodied in internationally agreed rules, is abundantly clear.
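
The arithmetic behind these figures is simple but worth spelling out; the sketch below, using only the numbers reported by the ASM Publications Board chairman, shows why research rules confined to the United States would miss much of the relevant work.

    # Manuscript counts reported for ASM's 11 peer-reviewed journals in 2002 (from the text above).
    total_manuscripts = 14000        # "nearly 14,000"
    select_agent_manuscripts = 224
    select_agent_non_us = 115

    print(select_agent_manuscripts / total_manuscripts)    # ~0.016: select-agent work is a small fraction
    print(select_agent_non_us / select_agent_manuscripts)  # ~0.513: slightly more than half had non-U.S. authors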

In order to be effective and to be accepted by those most directly affected, a new oversight arrangement must, in addition to being global in scope, also achieve a number of other objectives. First, it must be bottom-up. Rather than being the result of a political process, like the select agent regulations or the proposed U.S. government publication restrictions, any oversight system must be designed and operated primarily by scientists: those who have the technical expertise to make the necessary judgments about the potential implications of a given experiment.

Second, the system must be focused. It must define the obligations of individual scientists precisely in order to avoid uncertainty as to what is required to comply with agreed rules. This means relying on objective criteria rather than assessments of intent. This is especially important if the oversight system is legally binding, with possible penalties for violators. It also must be as limited as possible in terms of the range of activities that are covered. Not all microbiological research can or should be subject to oversight. Only the very small fraction of research that could have destructive applications is relevant.

Third, it must be flexible. Like the NIH Guidelines, any new oversight arrangement must include a mechanism for adapting to technological change. Most current concerns revolve around pathogens–either the modification of existing pathogens or the creation of new pathogens that are more deadly than those that presently exist. But as Steven Block has noted, “black biology” will in the not-too-distant future lead to the development of compounds that can affect the immune system and other basic life systems, or of microorganisms that can invade a host and unleash their deadly poison before being detected.

Finally, any new oversight arrangement must be secure. Both the genetic modification work undertaken as part of the Soviet offensive program and the more recent U.S. biodefense efforts underscore the importance of including all three relevant research communities–government, industry, and academia–in any future oversight system. This will require the development of provisions that allow the necessary degree of independent review without, at the same time, jeopardizing government national security information or industry or academic proprietary interests.

What then, might an internationally agreed oversight system aimed at achieving these objectives look like? To help explore this question, the Center for International and Security Studies at Maryland (CISSM) has, as part of a project launched even before September 11 and the anthrax attacks, consulted extensively with a diverse group of scientists, public policy experts, information technology specialists, and lawyers. Out of these deliberations has emerged a prototype system for protective oversight of certain categories of high-consequence biotechnology research. To the maximum extent possible, we have drawn on key elements of the oversight arrangements already in place. Like the NIH Guidelines, our system is based on the concept of tiered peer review, in which the level of risk of a particular research activity determines the nature and extent of oversight requirements. Like the select agent regulations, our system also includes provisions for registration (or licensing), reporting, and inspections.

We call our prototype the Biological Research Security System. At its foundation is a local review mechanism, or what we term a Local Pathogens Research Committee. This body is analogous to the IBCs and IRBs at universities and elsewhere in the United States that currently oversee recombinant DNA research (under the NIH Guidelines) and human clinical trials (under Food and Drug Administration regulations). In our system, this local committee would be responsible for overseeing potentially dangerous activities: research that increases the potential for otherwise benign pathogens to be used as weapons or that demonstrates techniques that could have destructive applications. This could include research that increases the virulence of a pathogen or that involves the de novo synthesis of a pathogen, as was done in the poliovirus experiment. Oversight at this level would be exercised through a combination of personnel and facility licensing, project review, and where appropriate, project approval. Under our approach, the vast majority of microbiological research would either fall into this category or not be covered at all.

At the next level, there would be a national review body, which we call a National Pathogens Research Authority. This body is analogous to the RAC. It would be responsible for overseeing moderately dangerous activities: research involving controlled agents or related agents, especially experiments that increase the weaponization potential of such agents. This could include research that increases the transmissibility or environmental stability of a controlled agent, or that involves the production of such an agent in powder or aerosol form, which are the most common means of disseminating biological warfare agents. All projects that fall into this category would have to be approved at the national level and could be carried out only by licensed researchers at licensed facilities. The national body would also be responsible for overseeing the work of the local review committees, including licensing qualified researchers and facilities, and for facilitating communications between the local and international levels.

At the top of the system would be a global standard-setting and review body, which we term the International Pathogens Research Agency. The closest analogy to this is the WHO Advisory Committee on Variola Virus Research, which oversees research with the smallpox virus at the two WHO-approved depositories: the CDC in Atlanta and Vector in Russia. This new body would be responsible for overseeing and approving extremely dangerous activities: research largely involving the most dangerous controlled agents, including research that could make such agents even more dangerous. This could include work with an eradicated agent such as smallpox or the construction of an antibiotic- or vaccine-resistant controlled agent, as was done during the Soviet offensive program. All projects in this category would have to be approved internationally, as would the researchers and facilities involved.

In addition to overseeing extremely dangerous research, the global body would also be responsible for defining the research activities that would be subject to oversight under the different categories and overseeing implementation by national governments of internationally agreed rules, including administering a secure database of information on research covered by the system. It would also help national governments in meeting their international obligations by, for example, providing assistance related to good laboratory practices. No existing organization currently fulfills all of these functions.
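
The division of labor among the three tiers can be summarized schematically. The sketch below is only a shorthand for the prototype described above; the category names and example activities are taken from the text, while the data structure itself is our own illustration, not part of the CISSM proposal.

    # Schematic summary of the proposed Biological Research Security System's three tiers,
    # paraphrasing the categories described in the text; illustrative only.
    OVERSIGHT_TIERS = {
        "potentially dangerous": {
            "review body": "Local Pathogens Research Committee",
            "examples": ["increasing the virulence of an otherwise benign pathogen",
                         "de novo synthesis of a pathogen, as in the poliovirus experiment"],
            "approval": "personnel and facility licensing plus local project review",
        },
        "moderately dangerous": {
            "review body": "National Pathogens Research Authority",
            "examples": ["increasing the transmissibility or environmental stability of a controlled agent",
                         "producing a controlled agent in powder or aerosol form"],
            "approval": "national approval; licensed researchers at licensed facilities",
        },
        "extremely dangerous": {
            "review body": "International Pathogens Research Agency",
            "examples": ["work with an eradicated agent such as smallpox",
                         "constructing an antibiotic- or vaccine-resistant controlled agent"],
            "approval": "international approval of the project, researchers, and facility",
        },
    }

    for tier, rules in OVERSIGHT_TIERS.items():
        print(f"{tier}: reviewed by {rules['review body']}")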

A more robust system

In today’s climate of heightened concern about bioterrorism, the idea of building on existing oversight processes to put in place a more robust system of independent peer review of high-consequence research seems less radical than when CISSM began this project in 2001. In the United States, there is a growing awareness that current domestic regulations do not provide adequate protection against the use of biotechnology research for destructive purposes. In May 2002, a senior White House Office of Homeland Security official urged the scientific community to “define appropriate criteria and procedures” for regulating scientific research related to weapons of mass destruction. In the coming months, a special committee appointed by the National Academies will decide whether to recommend enhanced oversight of recombinant DNA research in the United States, above and beyond that currently regulated by the RAC.

Others are ahead of the United States in recognizing the global dimensions of the problem. In September 2002, the International Committee of the Red Cross called on governments, the scientific and medical communities, and industry to work together to ensure that there are “effective controls” over potentially dangerous biotechnology, biological research, and biological agents. And in the run-up to the continuation of the BWC review conference last fall, the British counterpart to the National Academies, the Royal Society, called for agreement on a “universal set of standards for research” for incorporation into internationally supported treaties.

Thoughtful individuals will disagree about the research activities that should be covered by a new oversight arrangement, as well as the appropriate level of oversight that should be applied. They will also debate whether such a system should be legally binding, as envisioned in the prototype being developed by CISSM, or of a more voluntary nature, as has been suggested by researchers at Johns Hopkins University. But with each report of yet another high-consequence research project, fewer and fewer will doubt the nature of the emerging threat. Enhanced oversight of U.S. research is necessary but not sufficient. Common standards, reflected in internationally agreed rules, are essential if the full promise of the biotechnology revolution is to be realized and potentially dangerous consequences minimized. Our approach is one possible way of achieving that important goal.

“Nonlethal” Chemical Weapons: A Faustian Bargain

On October 23, 2002, approximately 50 Chechen separatist guerrillas took over a Moscow theater, holding about 750 people hostage. The hostage-takers were well armed with automatic weapons and grenades, and the female guerrillas were wired with high explosives. They demanded the withdrawal of Russian troops from Chechnya and threatened to kill the hostages and themselves if their demand was not met. The Russian government refused to negotiate. On the 26th, Russian special forces troops stormed the theater, first releasing a potent narcotic (a derivative of the opiate anesthetic fentanyl) into the ventilation system. When the troops burst into the main hall, they found the hostages and hostage-takers in a coma. The unconscious Chechens were all shot dead at point-blank range, and the hostages were rushed to hospitals. In the end, approximately 125 hostages died of overdose; the rest–more than 600–survived. A number of the survivors are likely to be left with permanent disabilities: opiate overdose causes respiratory depression that can starve the brain of oxygen, causing lasting brain damage when prolonged, and it took hours to evacuate and treat the hostages. Aspiration pneumonia, a frequent complication of opiate overdose, may also cause permanent damage.

This dramatic event brought into focus a debate that has been simmering in arms control circles for several years, barely noticed by the general public: whether “nonlethal” chemical weapons are legal, and, if they are, whether it is a good idea to develop them. Proponents have argued for some time that situations exactly like the one in Moscow justify the use of such weapons. A more likely result, however, is that these weapons will turn out to be a Faustian bargain–with temporary benefits and high costs.

Was the chemical attack during the Moscow hostage rescue legal under international law? The 1993 Chemical Weapons Convention (CWC) bans the development, production, stockpiling, and use of chemical weapons. It defines chemical weapons as all toxic chemicals and all devices specifically designed to deliver them. Toxic chemicals are defined as chemicals that “cause death, temporary incapacitation, or permanent harm to humans or animals.” Thus, chemical incapacitants are clearly prohibited. (The term chemical incapacitant is preferable to nonlethal chemical weapon because none of the possible agents is really nonlethal, as the death of so many Moscow hostages dramatically demonstrated.)

Under the CWC, however, there are four specific purposes for which toxic chemicals can be used without being considered chemical weapons: peaceful medical, agricultural, research, or pharmaceutical purposes; protective purposes (such as testing defenses against chemical weapons); military purposes not dependent on toxicity (many compounds in high explosives and other munitions are toxic, but their toxicity is irrelevant to their function); and “law enforcement including domestic riot control.” The first three of these are not relevant here, but the fourth clearly is. In Moscow, Russia was enforcing its own domestic law on its own territory, and so the use of a chemical incapacitant did not violate the CWC. Whether the development, production, and stockpiling of the agent was originally intended for this purpose, and therefore legal, is unknown.

Thus there are two rather different questions: Given that it is legal to develop and use these weapons for law enforcement purposes, is it wise? And how should prohibited military development be deterred?

Any program developing chemical incapacitants has to start with the realization that they are dangerous and will cause a significant death toll when used at levels that will incapacitate most of those exposed. How high a toll is hard to estimate with certainty, but even with optimistic assumptions, lethality of 10 to 20 percent has to be expected (see www.fas.org/bwc/papers/sirens_song.pdf), which is what happened in Moscow: roughly 125 deaths among some 750 hostages is a fatality rate of about 17 percent. Therefore, these agents should be used only as a last resort.

But could it work twice? The Moscow hostage rescue could be considered a success, since more than 80 percent of the hostages were recovered alive. But the next time terrorists engage in hostage-taking, they will certainly be prepared for the use of incapacitants, with gas masks and possibly antidotes. (There is a readily available and effective antagonist to opiates.) Such minimal preparations will completely defeat the advantage of chemical incapacitants and render them nearly useless for the specific scenario that proponents cite as requiring them. Since we can assume that terrorists or criminals would be prepared to defend themselves against chemical incapacitants, what legitimate uses would there be for such desperate measures? I believe that they have little utility for law enforcement in democratic societies.

But criminals, terrorists, and dictators will find them to be quite useful. The ideal targets for chemical incapacitants are people who cannot protect themselves, perhaps do not even expect an attack, and whose death is acceptable. Used by terrorists in conjunction with other weapons, such as incendiary devices or high explosives, chemical incapacitants could prevent flight and thus increase death tolls. Or they could provide a means to neutralize security forces silently, preserving surprise in the first few minutes of an attack on targets such as government buildings. Criminals might also find uses for such weapons; there is already a serious problem with chemical incapacitants being used to facilitate rape. Security forces in despotic regimes could use these agents to immobilize protesters rather than disperse them, as is done with existing riot control agents, thus allowing protesters to be taken into state custody. If chemical incapacitants become weapons in the arsenal of law enforcement agencies, they will enter the legal global trade in police weapons and be as available to despotic regimes as they are to democracies. They will also quickly enter the black market in arms, where they will be readily available to criminals and terrorists.

Like all chemical weapons, chemical incapacitants are primarily weapons for attacking the defenseless. Chemical weapons were used extensively in World War I, but neither side gained any significant advantage from them, because both sides were able to develop them and both deployed defenses. But after the war, two countries used them effectively against tribal peoples unable to defend themselves or retaliate in kind: the Spanish in Morocco and the Italians in Ethiopia. Chemical incapacitants will be the same: relatively ineffectual weapons for law enforcement because of their significant lethality and the ease of defense. But in the hands of terrorists, criminals, torturers, or despots, who care little about the lethality and whose victims are defenseless, they could pose a serious threat.

Military use of incapacitants

Of course, militaries would have additional uses for such weapons if they were willing to ignore the legal obstacles. Chemical incapacitants could have utility in urban warfare and in military operations other than war (counterterrorism, peacekeeping, and so forth). They would be quite attractive to special forces, which could use them to silently incapacitate opponents behind enemy lines. Thus stockpiles of chemical incapacitants for law enforcement would pose a nearly irresistible temptation to those who wanted to divert them to military purposes.

Would this be a bad thing? Nonlethal weapons are often perceived as a humane alternative to lethal weapons. Yet chemical incapacitants cause levels of lethality comparable to those of military firearms (about 35 percent), artillery (about 20 percent), grenades (about 10 percent), and civilian handguns (about 10 percent). Chemical weapons used in World War I were similar; they killed about 7 percent of casualties. Chemical incapacitants are clearly in the same category with respect to their lethality: There is no basis whatsoever for calling them nonlethal or less lethal or any of the other euphemisms that proponents use to imply a categorical difference.

Furthermore, the military use of nonlethal weapons is more often an adjunct to, not a replacement for, lethal force. The history of the U.S. military use of tear gas is a case in point. During the Vietnam War, the United States used tear gas extensively. The public rationale was identical to that now being cited for chemical incapacitants: humanitarian goals of reducing civilian deaths in situations in which combatants and noncombatants were mixed. Although tear gas was occasionally used for that purpose, the major use by far was to drive enemy troops from cover and make them more vulnerable to small arms fire, artillery, and aerial bombing. Thousands of tons of tear gas were used between 1966 and 1969, disseminated in hand grenades, rifle-propelled grenades, artillery shells, rockets, bombs, and helicopter-mounted bulk dispensers. Although considered highly successful by the military, the practice was widely condemned, and in 1975 President Ford issued Executive Order (EO) 11850, which restricted the use of riot control agents to “defensive military modes to save lives,” such as riot control in territories under U.S. control, cases where civilians are used as shields, the rescue of downed aircrews or escaping prisoners-of-war, and use behind the lines to protect convoys.

The United States, alone among the 150 parties to the CWC, argues that the convention allows the military use of riot control agents. The argument rests in part on the CWC’s different definitions of toxic chemicals (causing “death, temporary incapacitation, or permanent harm to humans or animals”) versus riot control agents (chemicals that “can produce rapidly in humans sensory irritation or disabling physical effects which disappear within a short time following termination of exposure”). Thus, the United States does not consider “sensory irritation or disabling physical effects” to be a form of “temporary incapacitation.” It also does not consider these chemicals to be toxic, despite the fact that they have caused many deaths and permanent disability.

The United States thus asserts that military use of riot control agents is limited by the CWC only by a single sentence: Riot control agents may not be used “as a method of warfare.” This prohibition would prevent future use of riot control agents in the way they were used in Vietnam, but the United States believes that it permits their use under the terms of EO 11850. Indeed, Secretary of Defense Donald Rumsfeld testified to Congress that he intended to request presidential approval to use riot control agents in case of war with Iraq. This would be most unfortunate, because the rest of the world would consider this to be chemical warfare. It would vitiate the U.S. argument that the war is a moral one, with part of its purpose being to enforce the CWC.

Despite its uniquely liberal interpretation of the CWC restrictions on the use of riot control agents, it is hard to imagine that even the United States could consider chemical incapacitants as anything other than toxic chemicals, and thus fully covered by the CWC. The intent of chemical incapacitants is, after all, to temporarily incapacitate victims, and their high lethality (compared with riot control agents) makes it clear that they are toxic chemicals. Thus the development of such agents as weapons would, in order to be legal, have to be for law enforcement purposes only; no military development, production, possession, or use would be permitted.

What might such a legal program look like? It would be administered, performed, and funded by nonmilitary agencies, such as the Department of Justice. The rationale under which approval is secured would mention only law enforcement purposes. The work would be unclassified. The safety requirement for agents would be compatible with domestic use. The munitions developed to deliver the agent would be those in common use by police.

Unfortunately, U.S. research into chemical incapacitants fails to satisfy these criteria (see www.sunshine-project.org). Most of the projects have been originated and funded by the military. The rationales refer almost exclusively to military scenarios, including urban warfare, military operations other than war, and even major theater war. Much of the work is classified. And an 81-mm mortar shell with a range of several miles is being developed to deliver “nonlethal” payloads, including chemicals. Only in the area of safety standards does the U.S. program appear to be consistent with law enforcement; the goal is an agent that causes less than 0.5 percent fatalities (comparable to tear gas).

Although research per se is not prohibited by the CWC, and it appears (at least on the basis of unclassified material) that the United States has not yet passed the threshold of prohibited chemical weapons development, its research is nevertheless provocative and destabilizing. The overt interest in prohibited agents and the repeated assertion that they could have military utility make it appear that it is only the lack so far of a suitable agent that has prevented the United States from entering prohibited territory. This perception, whether accurate or not, seriously erodes the U.S. claim to the moral high ground vis-à-vis countries like Iraq.

Of course, much of the U.S. research on chemical incapacitants may be classified, and this further reduces the confidence others can have in U.S. compliance with the CWC. Although the lead agency in this effort (the Marine Corps’ Joint Non-Lethal Weapons Directorate) has denied any current efforts to develop chemical incapacitants, retired Rear Admiral Stephen Baker has claimed that special forces are now equipped with “knockout gases” that he expects will be used in Iraq if needed. Clarification of this serious charge is urgently needed.

It would do a great deal of good if President Bush or Secretary Rumsfeld would unambiguously disavow any intent to develop chemical incapacitants as military weapons, deny any current possession or deployment (or, if necessary, order their immediate destruction), and explicitly acknowledge that such incapacitants are prohibited by the CWC. If this were coupled to a commitment to forgo the use of riot control agents against Iraq, the United States could recapture some of the legitimacy lost because of ambiguity concerning its own compliance with the CWC.

The potential for misuse

The pursuit of chemical incapacitants for law enforcement purposes will turn out to be a Faustian bargain at best; their pursuit for military purposes would violate the CWC. By far the best policy option is to eschew this category of weapon entirely and to exert leadership in the international arena to ensure that others do the same.

Although there is some possibility that chemical incapacitants might be useful law enforcement tools in certain special circumstances, the ease of protection against them means that such circumstances will be rare. Their utility will thus be very limited and not worth the price that inevitably would have to be paid.

The price is high in many ways. Incapacitants have a much greater potential to be used by dictators, terrorists, or criminals than by law enforcement. And the temptation to divert such weapons to military uses will be immense. Certainly the world has taken note of the persistent U.S. military interest and the apparent Russian stockpiles. Many nations probably disbelieve U.S. claims that it is restricting itself to permitted research activities, and may thus be encouraged to begin their own clandestine development and stockpiling. In this way a new chemical arms race may begin.

Even if development and stockpiling of incapacitants were scrupulously restricted to law enforcement purposes, the CWC would nonetheless be fundamentally undermined. The overarching purpose of the CWC is to prevent nations from going into military conflict in possession of chemical weapons that they are prohibited from using. The ban on use is secondary, since that ban has been in place since the 1920s. If we permit stockpiles of chemical incapacitants for law enforcement that could be instantly redirected to military use, we will have seriously subverted the CWC.

In the long run, the pursuit of chemical incapacitants is likely to be the first step in the exploitation of pharmacology and biotechnology for hostile purposes. It would be naïve to think that this exploitation could be confined to domestic law enforcement. Even if the more trustworthy nations observe their treaty commitments, many others will be seduced by the military utility of pharmacological weapons. And during the next several decades, scientific advances will almost certainly see a tremendous expansion of our capabilities to manipulate human consciousness, emotions, motor control, reproductive capacity, behavior, and so forth. Such capacities have potential for great medical benefits, and this potential (along with the profits that could be made in applying the benefits) ensures that progress will be rapid. But they also have entirely novel and terrifying potential for abuse. Our challenge is to bequeath to our children a future in which the benefits of biotechnology and pharmacology are realized, but their abuses contained. This is a formidable challenge; no militarily useful technology has ever been successfully eschewed.

One important measure would be a new international treaty that prohibits the hostile manipulation of human physiology, particularly with respect to the central nervous system and reproductive physiology. The European Parliament has already called for “a global ban on all developments and deployments of weapons which might enable any form of manipulation of human beings” (European Parliament resolution A4-0005/1999, www3.europarl.eu.int/), and the International Committee of the Red Cross has urged states “to adopt at a high political level an international Declaration on Biotechnology, Weapons and Humanity containing a renewed commitment to existing norms and specific commitments to future preventative action” (www.icrc.org). Given the rapid rate of scientific progress, it is urgent to establish a clear understanding that any manipulation of human biochemistry or genetics for hostile purposes is completely unacceptable. A new international convention would be an important step in establishing such a norm.

Negotiating a new international treaty will take years. More immediately, Congress should initiate active oversight of the nonlethal weapons programs of the Departments of Defense, Energy, and Justice, of the Central Intelligence Agency, and of any other agencies involved. Such oversight should pay specific attention to the long-term policy issues. Since much of this research is probably classified, and thus unknowable to the media or the public, only congressional oversight can ensure that it is conducted in accordance with the best long-term interests of the United States and the world. The long-term potential of biotechnology and pharmacology to be used to do harm is too serious a policy issue to be left to the military, where short-term tactical considerations may lead to unwise decisions.

The United States is certain to be the critical player; it is the world’s preeminent biotechnology and pharmaceutical power, and the world’s foremost military power. For better or worse, the United States will lead the way into the exploitation of biotechnology as weaponry or into a robust ethical and political system preventing such exploitation. The choice is ours; we should make it actively, and not slide unwittingly into a future we have not chosen and may bitterly regret.

A Program for Africa’s Computer People

Jacob Aryetey has two personal computers on his desk, only one of which is connected to the Web. The Web-connected one is the anomaly: Aryetey is the only one of the four computer science faculty members at the University of Ghana with Web access in his office. A native of Ghana, he is chairman of a computer science department that graduates about three dozen students a year.

A database specialist, the 48-year-old Aryetey is on the front lines of a little-known aspect of Africa: the drive to develop a home-grown cadre of software programmers and computer engineers who can make an African city–maybe Accra, Ghana’s capital–a hub of information technology (IT) activity similar to India’s Bangalore. Ghana, a country of 20 million people that is sandwiched on the coast of West Africa between the Ivory Coast and Togo, has a democratically elected government; has never had a civil war; and has seen a vast expansion in the past five years in computing and communications capabilities.

Sub-Saharan Africa, of course, is better known for war, famine, natural disaster, and the HIV/AIDS pandemic. Many Africans have armed themselves with machetes and machine guns, not personal computers. But Aryetey is one African who is bent on disproving the stereotype that people from his part of the world cannot participate creatively in the digital world.

To be sure, being a computer professor in Ghana is daunting. Aryetey joined the department six years ago and teaches three classes a term, for which he is paid about $300 a month. His salary is large by the standards of his country, where nurses in public hospitals earn $50 a month and police officers even less. But good software programmers can earn twice or three times as much as a professor if they work in business. Demand for competent software programmers is high, so high that Aryetey struggles to fill open faculty slots. When he last found a competent, experienced person, he eagerly offered him a job. “I never heard from the person again,” Aryetey recalls, “not even the courtesy to tell me he wasn’t interested.”

Aryetey says he could not remain in his university position were it not for his outside consulting activities. “My ability to work outside is what keeps me here,” he says. There is no limit on the amount of time he can spend on other work; he even can cancel university classes (and has) if outside deadlines loom.

Without more faculty, Aryetey believes that instruction in the computer science department will remain inconsistent. “Some courses were designed ten to fifteen years ago,” he says. Lecturers, gleaned from Accra’s small community of commercial programmers and hardware engineers, bring more current practices into the classroom, but few volunteer to teach. The $5-an-hour salary, even though it does cover preparation and transportation, is not very appealing.

Isolated from the global intellectual currents in his field and short of help, Aryetey chiefly concentrates on maintaining a minimal standard for the seven to eight courses offered by the department each term. The university cannot offer a full-fledged bachelor of science degree because there are too few teachers, so students must double-major in another discipline such as math, physics, or chemistry. By senior year, about 35 students remain in the program. Aryetey, in addition to all his other activities, personally advises them all. He estimates that about five members of each graduating class are, in his view, “international class” in software and computer engineering skills. “Our emphasis is to give the fundamental principles in computer software,” he says.

Gaps in learning, however, are significant. One afternoon, Kwesi Debra, the chief code writer at the Bank of Ghana, visits campus to talk with computer science students about future careers. After explaining that only the week before he had taken over a class in the computer language C++ (from a professor who left suddenly for Scotland), Debra expresses his dismay that some of the third- and fourth-year students in his class have never written or compiled a program in the language being taught and that in another class they are studying an assembly language from the 1960s. “I believe most of what you are learning here isn’t relevant,” he tells the students, then adds: “Your curriculum must be changed . . . It must be relevant to the needs of industry.”

The better students in the computer science department recognize the most glaring inadequacies of their education. The department’s computer lab has only about two dozen working PCs–none connected to the Internet. Some students write programs in longhand, then later type them into the computer. More determined students pay to use the Web café on campus, but at 50 cents an hour they can’t afford much time online.

By their senior year, the best students often have exhausted the department’s resources and are left to forage for knowledge on their own. They are not encouraged to get work experience and must search on their own to arrange volunteer internships. Students fret over “outdated material,” such as “five-year old handouts,” and lecturers who come to class unprepared or don’t show up at all. “We wait 30 minutes and then we will go,” says one fourth-year student. She adds: “Worse, the lecturers never offer to make up [a missed] class.”

Students say they have no one to whom they can complain. “You are not advised to complain,” says one of the top students in the department. “We’ve seen cases where lecturers retaliate against you. We don’t have the freedom to complain.” By comparison, the student says, more established departments provide stronger instruction and greater support. “In computer science, the university doesn’t care about us.”

Aryetey admits that the computer science department is a poor stepchild to the older academic disciplines. The university, he explains, is frozen in time, with relatively large resources devoted to a department of statistics, because the university’s priorities were set in the 1960s (and largely remain there), when computer science was in its infancy as an academic field and statistics was central to the social sciences.

Individuals take the lead

The task of reforming technical and scientific education in Ghana is urgent, but the government possesses neither the resources nor the roadmap to lead the effort. Ghana is typical of countries in sub-Saharan Africa. The best Ghanaian students try to land places as undergraduates in U.S. or European universities, hoping to remain abroad after graduation. One who did was Patrick Awuah, who went on to become a code writer at Microsoft. In his mid-30s, Awuah decided to leave Microsoft in order to help Ghana, his native country, better compete in the software world. Showing no shortage of ambition, he founded a new university 18 months ago.

The university is called Ashesi, which means “beginning” in Twi, the country’s dominant traditional language. It is housed in an attractive compound in the central Accra neighborhood of Labone. To ensure that students gain a foundation in the school’s core subjects, Ashesi offers a fixed lineup of courses for the first two years. These courses create a common experience for students, help to maintain the quality of instruction, and reduce the cost of running the school. The goal is to blend training in software engineering with liberal arts and business studies. In early 2002, Ashesi began its second year of instruction, with a freshman class twice the size of the previous year’s.

The very existence of a Patrick Awuah comes as a shock to theorists of underdevelopment and the digital divide. Africa is not supposed to supply code writers to Microsoft, and it certainly is not expected to get them back older, wiser, and more idealistic. Yet Awuah is literally trying to bring the spirit of Silicon Valley to Accra. Awuah is a quiet revolutionary, bent on creating a cadre of successful technology business leaders who are public-spirited and committed to lifting Africa by its bootstraps into the age of cyberspace. “We’re not just building a technical workforce,” he says. “We’re training ethical and entrepreneurial business leaders.”

Awuah, now 37 years old, lives in Seattle, shuttling to and from Ghana to administer the university. He plans to move and live full time in Ghana in mid-2003. Launching a university, he admits, is a gamble, both professionally and personally. In addition to raising $2.6 million in charitable donations on behalf of the school–some from other former Microsoft employees–Awuah has invested his own money as well. “We’re taking some big risks here,” he says. In order to maintain Web access for its faculty and students, Ashesi must spend $1,800 a month for a satellite link. Because of severe shortcomings in Ghana’s public telecom system, Awuah’s university must create and maintain its own infrastructure. This in itself is a step forward for Ghana. As recently as three years ago, private data networks, linked by satellite to the outside world, were unknown (and indeed essentially illegal) in Ghana.

The Ashesi experiment is drawing the attention of government officials and education policymakers, but it alone is not the answer. The annual tuition of more than $1,000 puts Ashesi out of reach for all but a fortunate few. In the absence of either a good public university or an affordable private one, enterprising, computer-obsessed Ghanaian youth are crafting their own way forward. They grab whatever training they can: a mish-mash of distance learning over the Web and paid training courses available from private centers in the city. Some of these computer enthusiasts work in Web cafés, others manage computer networks, and a few customize standard software programs.

Dan Odamtten is one of these people who tailor programs, work that requires him to learn scripting. Odamtten, 29, has only a high-school diploma. His father wanted him to become a nurse, but “I thought computers were the future,” he says. To get started, Odamtten took a nine-month course at a computer institute, for which his mother paid the fees without telling her husband. Odamtten learned how to program in BASIC and, as an exercise, wrote a payroll program. Unable to find a computer job when he graduated, he convinced a local software house, which specializes in supplying programs to small banks, to train him without pay.

Odamtten began by installing shrink-wrapped software for the company’s banking clients. After six months the company decided to put him on the payroll, but at only $30 a month. After another six months, he was asked to customize a program in MS-DOS. He has since moved to customizing Windows programs. The company now counts him among its best programmers and pays him a few hundred dollars a month, or about five times the salary of a nurse. Despite his success, Odamtten worries about “falling behind,” because it is so difficult to acquire new skills.

The pressure to keep up with the pace of change is even greater for the relatively few programmers in Accra who write original code. These programmers usually have some university training but are largely self-taught. One of the most thoughtful and active programmers in Accra is Guido Sohne. The son of a successful civil engineer, Sohne showed aptitude for computers in secondary school, posted a near-perfect score on his math SATs, and gained admission to Princeton University. But after two years, he flunked out because of poor study habits and failure to attend class. “I was too smart for my own good,” he says. “I didn’t go to class. I didn’t take things seriously.” Instead, he surfed the Internet constantly, becoming an accomplished player of multiple-user computer games. “On the Web, I was this superpowerful being, reaching the apex of my power–around exam time,” he recalls. In his final quarter at Princeton, Sohne failed three classes.

That was about a decade ago. Sohne returned to Ghana with something to prove and sought help from Nii Narku Quaynor, a pivotal figure in Ghana’s computer scene. A native of Ghana with a Ph.D. in computer science from the State University of New York at Stony Brook, Quaynor had returned to live in Ghana in the early 1990s after more than 10 years working for Digital Equipment Corporation. Quaynor was the first significant instance of an accomplished technologist returning to Ghana from abroad, and he would go on to form a networking company in Accra that brought Internet access to a West African country for the first time.

Quaynor helped Sohne found a software services company, which turned over an impressive $30,000 in revenues over two years before Sohne, ever restless, grew bored with the business and closed it. He then worked for a couple of years as the computer network manager of Soft, the pioneer software house in Accra. Today he works independently as a code writer, battling such difficult conditions as an absence of good tools and frequent power outages. Often, he codes in his parents’ bedroom on his father’s PC. With an electricity supply marked by frequent surges, drops, and interruptions, he says, “We just have to make saving every five minutes a habit.”

Sohne is an advocate of open-source code and is an important voice in the emerging debate over protections on intellectual property in Ghana and the potential benefits of choosing public-domain software over proprietary programs such as those sold by Microsoft. Ghana, as a member of the World Trade Organization, is under pressure to revise and update its existing copyright law, which makes no explicit reference to software or digital media. Draft legislation to enact a U.S.-style system of protections for software has been proposed, but no action has been taken for many months as the government conducts a study that is expected to lay the basis for a national IT policy. Sohne opposes tight protections on software. He argues that although the country’s small software producers need to benefit from their intellectual property, they also need to draw on the intellectual property of the United States and Europe in order to develop a pool of knowledge out of which African innovations may flow.

For programmers such as Odamtten and Sohne, there is no place to go to improve their skills. The instruction at the commercial computer schools in Accra is too basic, and universities don’t offer challenging courses geared to adult students. There is no place in Ghana, for instance, to get a master’s degree in any subject related to software or computer engineering. Professional bodies are weak or nonexistent. Ghana has an association of engineers, but the group devotes little time to computing or electrical engineering. There is an association of “Internet professionals,” but the emphasis of the group is on marketing and business, not technical issues.

Sohne copes with his situation by foraging on the Web for useful bits, sometimes e-mailing Americans or Europeans whom he has never met for help. In late 2002, he wrote to a programmer in Utah, asking for an algorithm to help with a phone billing system that he was writing for Busyinternet, the Web café where he has kept an office. The American sent him a useful algorithm for free and Sohne responded, in hacker spirit, by sending him his completed billing code.

Forging technical links with foreigners can be difficult, however. Neither of the major U.S. professional bodies for computer engineers and software programmers, the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), offers memberships tailored to people living in poor, remote countries. In the fall of 2002, Samuel Oduro, an electrical engineer, inquired about membership in the IEEE, which has just a handful of members in Ghana, and was disappointed at the high cost. The lowest fee rung, for engineers earning under $11,000, calls for a membership fee of $70. Even if Oduro can scrape together the money, he has no mechanism to pay. He doesn’t have a credit card (the normal way to pay on the Web), and the IEEE won’t take a check from his local bank (in Ghana’s currency). “Even if I want to pay the $70, how do I send the money?” he asks.

Brain drain

The steady flow of educated people out of Africa puts great pressure on the university professors and technical professionals who remain behind. Although sub-Saharan Africa has the lowest average educational attainment of any region in the world, African immigrants to the United States have, on average, spent more time in school than not only native-born Americans but every other immigrant group. According to the United Nations (UN), as many as 30,000 Africans living outside the continent hold doctoral degrees. Thus, African migration to the United States, and to a lesser extent to Britain, France, Germany, and Holland, is a migration of elites. The elite migration pattern is especially applicable to Ghanaians (look no further than UN secretary general Kofi Annan, who hasn’t lived in his country of origin for decades). One estimate, cited in the World Competitiveness Yearbook 2001, claims that 26 percent of the professionals educated in Ghana today live in developed countries; that is about eight times the percentage for India and China.

Most of the professionals leaving Ghana are doctors, accountants, and especially nurses. In the late 1990s alone, more than a thousand nurses may have left the country to take jobs in nurse-hungry Britain, South Africa, and northern Europe. Ghana simply doesn’t produce a large enough number of electrical engineers and computer scientists to match the numbers of other professionals. But because demand for skilled computer people is already so high in Accra, Ghana’s largest labor market, even a small flow of departures hurts.

Some of the best technical talent in Ghana leaves the country after secondary school to attend British or U.S. universities. These students are unlikely to ever return to Ghana, because the skills they gain from attending top universities essentially “price them out” of the Accra labor market. The question of brain drain is central to any analysis of the transformative potential of technology in Ghana. Recruitment of new code writers–even at an average starting salary of $500 a month, or 10 times the wages of a policeman or a nurse–is difficult. And retaining those who are hired is a problem. With no university within Ghana offering a master’s degree in computer science, people who want advanced training often leave the country. “Keeping skills, stopping the brain drain, is our number one priority,” says David Bolton, a British-born Ghanaian who manages programmers at Soft. “As soon as a programmer realizes what he can earn in the United States, how do you keep him?” Bolton, whose task is to find ways to keep code writers at home, points to his own decision to leave Britain a decade ago and move to Ghana, where his mother was born. “We have a good quality of life, but programmers need the latest tools, challenges, and rewards,” he says.

The shortage of accomplished technical people raises costs and reduces output. “There are not a lot of good people,” says an Australian who until recently served as the engineering chief of a wireless phone company in Accra. “The good ones become consultants, and they are bloody expensive.”

There is no quick fix to the brain drain, and government policymakers seem flummoxed by the situation. Some have considered educating fewer people in computers or electrical engineering, because so many emigrate and provide no benefit to Ghana. But the government needs to boost enrollments in order to make Ghana a place where skilled computer workers can thrive and advance.

One intriguing possibility is to mobilize a planned software institute that will initially help the government improve its own use of IT. Initial funds for the institute, likely to open in the second half of 2003, come from India, whose government was privately importuned by Kofi Annan to assist his country. India, whose prowess in software is well known, agreed to outfit a research and training lab and to tutor the inaugural group of Ghanaian instructors for six months in India. The government chose the group of trainees from its own civil servants, thus missing an opportunity to reward some of the country’s talented but undertrained working programmers. But if managed wisely, the institute could transcend its initial mission of modernizing government operations to become a powerful magnet for the country’s top programmers, who need a public center for advanced training in software engineering in order to undercut the temptation to exit Ghana.

To be sure, the brain drain won’t stop, but perhaps it can be tamed. Quaynor argues that Ghana must produce more IT professionals, even if the domestic economy can’t absorb them. If they succeed elsewhere in the world, he believes, “these people can be mobilized from a distance.” And he warns against making it too hard for Ghanaians outside of the country to contribute back home. “Let them contribute easily and earn a reward.”

Ways to help

U.S. universities are not set up to do charitable work in developing countries, but with support from the government and international organizations they can play an important role in helping African universities and their surrounding technical communities. Providing meaningful help will be challenging, but conditions in Africa are far from hopeless. As Ghana’s case illustrates, there is a growing computer community that has the skills and interest to take advantage of stronger links to U.S. computer scientists and electrical engineers. Americans would be building on an existing base–fragile and immature, but dynamic. I recommend beginning with a few inexpensive and flexible efforts that can be done without Americans leaving the comfort of their offices and that will set the stage for integrating the African high-tech sector with the rest of the world.

Form a distance-learning partnership with a computer science department. The Massachusetts Institute of Technology and several other universities are making the materials for a number of courses available online. It would be a short step to identify various universities in sub-Saharan Africa that would encourage 5 or 10 of their best computer science students to audit a U.S. course electronically. There is no better way for an African to learn something of the state of the art in a field–and where his or her own educational deficits are–than to learn what the pacesetters are learning. U.S. professors can also help by exposing professors in Africa to relevant materials on the Web that would freshen course materials and excite students. And all of this would be done electronically.

Nurture a buddy system. Promising technical workers in Ghana face a constant battle against isolation, loneliness, and a shortage of good “inputs.” E-mail offers a wonderful antidote to this. Why can’t graduate students in top electrical engineering and computer science departments begin “pen-pal” relationships with programmers and computer engineers in Africa? This sounds prosaic, but one-on-one interactions are the basis for forming wider technical networks. Such networks don’t include Africans, and they should. And not only the Africans would benefit. Wouldn’t American technical talent gain from learning about the needs of less privileged students and professionals?

Enlist Africans in the global project of writing open-source code. Activists in the public-domain software movement can educate African code writers and computer scientists about the importance of open-source software and help them to master the techniques of assembling such code. Open source carries special significance for Africa, where the cost of software can be an insurmountable obstacle to progress. By way of full disclosure, I am involved in the early stages of a small collaboration, of the sort I am suggesting here, between Jerry Feldman, a computer scientist at the University of California at Berkeley, the Finnish philosopher Pekka Himanen (author of The Hacker Ethic), and a dozen volunteer programmers and university computer science students in Accra, Ghana. The goal is to create an instance of what Himanen describes as “open community development.” Such collaborations could add to the inventory of open-source code and even create code that addresses specific unmet social needs in Africa. Even if no useful programs get created, the exchange between U.S. computer scientists and African university students and young programmers seems worthwhile.

Provide access to information. U.S. associations of computer scientists and electrical engineers ought to offer more flexible ways for students, faculty, and working professionals in sub-Saharan Africa to benefit from the many journals and other materials produced by these associations. The question is not how to enroll Africans as members but rather how to expose them to the latest trends in their fields and to remove any gratuitous barriers facing African IT professionals who want to join groups such as the ACM or IEEE.

Create a business presence. A major U.S. computer or software company could open a small outpost in an African city, dedicated to outreach with local universities and the identification of technical talent. Even a single engineer or programmer could make an enormous contribution to forging links between technically savvy Africans and the global technology mainstream. Corporations have opened such technical outposts in China, India, Russia, and elsewhere. It is easy to say it can’t be done in Africa, yet the environment for such an experiment seems right in Ghana.

International organizations such as the World Bank or the United Nations Development Program could increase the relevance of African universities by helping them to improve links to practitioners in the IT community. International donors could support the creation of satellite campuses dedicated to the further education of computer and communications professionals. Individuals also can help. The government of India has funded the creation of a software center of excellence in Accra, and two European aid agencies have provided the bulk of the funding for Ghana’s first formal venture capital fund. As the commercial high-tech industry grows in Africa, aid organizations such as USAID could help encourage links between entrepreneurs and African academics that improve the quality of instruction in computing and engineering programs, as well as help commercial companies meet their human resource needs.

Finally, international organizations could help African universities gain efficiencies by developing regional approaches to education. Ghana, for example, could be encouraged to cooperate with its African neighbors so that one country could build a specialty in, say, aviation engineering, while another concentrates on computer networking. Students could then be encouraged to attend the university that best suits their interests, regardless of national boundaries.

The combination of computing and communications is transforming developing countries, especially in Africa, where great distances, a harsh climate, and poor infrastructure have long hampered development. But technological innovations do not arise in isolation from the people who create and use them. Social networks shape technological choices, which reflect the values of the members of these human networks. Informal contacts between African and U.S. IT people might seem an insignificant contribution to the formidable challenge of raising the technical level in sub-Saharan Africa, but these contacts will create a necessary sense of inclusion among Africans in the global computer community. These transnational exchanges can also help identify and empower future leaders in Africa’s fledgling computer scene. In an African country, a small number of capable, motivated, and intellectually nourished people can exert a substantial influence over educational and technological opportunities. If Africa’s computer professionals succeed, they will provide an essential building block for wider economic development and an inspiring example to developing-country professionals in other sectors.

The Hazards of High-Stakes Testing

With the nation about to embark on an ambitious program of high-stakes testing of every public school student, we should review our experience with similar testing efforts over the past few decades so that we can benefit from the lessons learned and apply them to the coming generation of tests. The first large-scale commitment to accountability for results in return for government financial assistance came in the 1960s, with the beginning of the Title I program of federal aid to schools with low-income students. The fear then was that minority students, who had long been neglected in the schools, would also be shortchanged in this program. The tests were meant to ensure that poor and minority students were receiving measurable benefits from the program. Since that time, large-scale survey tests have continued to be used, providing a good source of data for determining program effects and trends in educational achievement.

Critics of testing often argue that the test scores can sometimes provide an inaccurate measure of student progress and that the growing importance of the tests has led teachers to distort the curriculum by “teaching to the test.” In trying to evaluate these claims, we need to look at the types of data that are available and their reliability. In other words, what we know and how we know it. For example, when people claim that there is curriculum distortion, they are often relying on surveys of teachers’ perceptions. These data are useful but are not the best form of evidence if policymakers believe that teachers are resisting efforts to hold them accountable. More compelling evidence about the effects of testing on teaching can be obtained by looking directly for independent confirmation of student achievement under conditions of high-stakes accountability. Early studies revealed very quickly that the use of low-level tests produced low-level outcomes. When students were evaluated only on simple skills, teachers did not devote time to helping them develop higher-order thinking skills. This was confirmed in the well-known A Nation at Risk report in the early 1980s and about a decade later in a report from the congressional Office of Technology Assessment.

In 1991, I worked with several colleagues on a validity study to investigate more specifically whether increases in test scores reflected real improvements in student achievement. In a large urban school system in a state with high-stakes accountability, random subsamples of students were given independent tests to see whether they could perform as well as they had on the familiar standardized test. The alternative, independent tests included a parallel form of the commercial standardized test used for high-stakes purposes, a different standardized test that had been used by the district in the past, and a new test that had been constructed objective-by-objective to match the content of the high-stakes test but using different formats for the questions. In addition to content matching, the new test was statistically equated to the high-stakes standardized test, using students in Colorado where both tests were equally unfamiliar. When student scores on independent tests were compared to results on the high-stakes accountability test, there was an 8-month drop in mathematics on the alternative standardized test and a 7-month drop on the specially constructed test. In reading, there was a 3-month drop on both the alternative standardized test and the specially constructed test. Our conclusion was that “performance on a conventional high-stakes test does not generalize well to other tests for which students have not been specifically prepared.”
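To make the comparison concrete, here is a minimal sketch (in Python) of the kind of calculation such a validity study turns on: comparing mean grade-equivalent scores on the familiar high-stakes test with scores from an independently administered parallel form given to a random subsample. The scores and variable names below are invented for illustration only; they are not the study’s data.

# Hypothetical illustration of a test-score validity check: compare mean
# grade-equivalent (GE) scores, expressed in months of schooling, on a
# familiar high-stakes test and on an independent parallel form.
from statistics import mean

high_stakes_math = [52, 55, 49, 58, 53, 51, 56, 50]   # familiar test (GE months)
independent_math = [45, 46, 40, 51, 44, 43, 47, 42]   # unfamiliar parallel form

def mean_ge_drop(familiar, independent):
    """Return the drop in mean grade-equivalent months on the independent test."""
    return mean(familiar) - mean(independent)

print(f"Mean drop on independent test: {mean_ge_drop(high_stakes_math, independent_math):.1f} GE months")
# A large drop suggests that performance on the familiar test does not
# generalize to tests for which students have not been specifically prepared.

Run on these invented numbers, the sketch reports a drop of about 8 grade-equivalent months, the same order of magnitude as the mathematics results described above.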

While researchers were addressing the validity of test score gains, other studies examined the effect of high-stakes accountability pressure on curriculum and instructional practices. These studies, which involved large-scale teacher surveys and in-depth field studies, show that efforts to improve test scores have changed what is taught and how it is taught. In elementary schools, for example, teachers eliminate or greatly reduce time spent on social studies and science in order to spend more time on tested subjects.

More significantly, however, because it affects how well students will eventually understand the material, teaching in tested subjects (reading, math, and language arts) is also redesigned to closely resemble test formats. For example, early in the basic-skills accountability movement, Linda Darling-Hammond and Arthur Wise found that teachers stopped giving essay tests as part of regular instruction so that classroom quizzes would more closely parallel the format of standardized tests given at the end of the year. In a yearlong ethnographic study, Mary Lee Smith found that teachers gave up reading real books, writing, and long-term projects, and focused instead on word recognition, recognizing spelling errors, language usage, punctuation, and arithmetic operations. Linda McNeil found that the best teachers practiced “double-entry bookkeeping,” teaching students both what they needed for the test and the real knowledge aimed at conceptual understanding. In other cases, test preparation dominated instruction from September until March. Only after the high-stakes test was administered did teachers engage the real curriculum such as Shakespeare in eighth-grade English. These forms of curriculum distortion engendered by efforts to improve test scores are strongly associated with socioeconomic level. The poorer the school and school district, the more time devoted to instruction that resembles the test.

I believe that policymakers would benefit from seeing concrete examples of what students can and cannot do when regular teaching closely imitates the test. One high-stakes test for third graders included a math item showing three ice cream cones. The directions said to “circle one-third of the ice cream cones.” Correspondingly, the district practice materials included an item in which students were to circle one-third of three umbrellas. But research has shown that many students who have practiced the item only in this form cannot necessarily circle two-thirds of three ice cream cones, and most certainly cannot circle two-thirds of nine Popsicle sticks.

Other systematic studies show dramatically what students don’t know when they learn only the test. In a randomized experiment conducted by Marilyn Koczer, students were trained exclusively to translate either Roman numerals to Arabic or Arabic to Roman. Then random halves of each group were tested on their knowledge using either the same order as their original training or the reverse order. Students who were tested in the reverse order from how they had practiced scored 35 to 50 percentile points lower, suggesting that the high test performance of those tested in the same order as practiced does not necessarily reflect deep or flexible conceptual understanding.

We also have to be careful in listening to discussions of alignment between the curriculum and the test. It is not enough that each item in the test correspond to some standard in the curriculum. To be useful, the test items must cover a wide array of standards throughout the curriculum. Many teachers will teach to the test. That’s a problem if the test is narrowly structured. If the test covers the full domain of the curriculum, then there is no great harm in teaching to the test’s content. But there still can be a problem if students are trained to answer questions only in multiple-choice format. They need to be able to write and reason using the material.

The setting of performance standards, which is usually done out of sight of the public, can have a powerful effect on how the results are perceived. Texas made two interesting choices in setting its standards. It wisely made the effort to coordinate the standards across grades. For example, in setting the 10th-grade math standard, it also considered where to set the standard for earlier grades that would be necessary to keep a student on schedule to reach the 10th-grade standard. Although policymakers set the standard by saying they wanted students to know 70 percent of the basic-skills test items, this turned out to be the 25th percentile of Texas students. Selecting a low performance standard was wise politically, because it made it possible to show quick results by moving large numbers of students above this standard.

My state of Colorado made the educationally admirable but politically risky decision to set extremely high standards (as high as the 90th percentile of national performance in some areas) that only a pole-vaulter could reach. The problem is that it’s hard to even imagine what schools could do that would make it possible to raise large numbers of students to this high level of performance. Unless the public reads the footnotes, it will be hard for it to interpret the test results accurately.

These political vicissitudes explain why psychometricians are so insistent on preserving the integrity of the National Assessment of Educational Progress (NAEP), which is given to a sample of children across the country and which teachers have no incentive to teach to, because the results have no direct high-stakes consequences for them or their students. The test’s only purpose is to provide an accurate comparative picture of what students are learning throughout the country.

If states so choose, they can design tests that will produce results that convey an inflated sense of student and school progress. There may also be real gains, but they will be hard to identify in the inflated data. NAEP is one assessment mechanism that can be used to gauge real progress. NAEP results for Texas indicate that the state is making real educational progress, albeit not at the rate reflected in the state’s own test. Texas is introducing a new test and more rigorous standards. Let’s hope that it provides a more realistic picture.

There are signs that Congress understands the possibility that test data can be corrupted or can have a corrupting influence on education. Even more important, it has been willing to fund scientific research studies to investigate the seriousness of these problems. In 1990, Congress created the NAEP Trial State Assessment and concurrently authorized an independent evaluation to determine whether state assessments should become a regular part of the national assessment program. More recently, Congress commissioned studies by the National Research Council to examine the technical adequacy of proposed voluntary national tests and the consequences of using tests for high-stakes purposes such as tracking, promotion, and graduation. Even President Bush’s new testing plan shows an understanding of the principle that we need independent verification of reported test score gains on state accountability tests.

The nation’s leaders have long faced the problem of balancing the pressure to ratchet up the amount of testing against uncertainty about how to ensure the validity of tests. Ten years ago, many policymakers embraced the move toward more authentic assessments as a corrective to the distortion and dumbing-down of the curriculum, but the approach was abandoned because of cost and difficulties with reliability. We should remember that more comprehensive and challenging performance assessments can be made equal in reliability to narrower, closed-form machine-scorable tests, but doing so takes more assessment tasks and more expensive training of scorers. The reliability of multiple-choice tests is achieved by narrowing the curricular domain, and many states are willing to trade the quality of assessment for lower cost so that they can afford to test every pupil every year and in more subjects. Therefore, we will have to continue to evaluate the validity of these tests and to ask what is missed when we focus only on the test. Policymakers and educators each have important roles to play in this effort.

Policymakers

Preserve the integrity of the database, especially the validity of NAEP as the gold standard. If we know that the distorting effects of high-stakes testing on instructional content are directly related to the narrowness of test content and format, then we should reaffirm the need for broad representation of the intended content standards, including the use of performance assessments and more open-ended formats. Although multiple-choice tests can rank and grade schools about as well as performance assessments can, because the two types of measures are highly correlated, this does not mean that improvements in the two types of measures should be thought of as interchangeable. (Height and weight are highly correlated, but we would not want to keep measuring height to monitor weight gain and loss.) The content validity of state assessments should be evaluated in terms of the breadth of representation of the intended content standards, not just “alignment.” A narrow subset of the content can be aligned, so this is not a sufficient criterion by itself.
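The height-and-weight point can be illustrated with a toy simulation; the numbers below are hypothetical and do not come from any actual assessment. Two testing formats can be highly correlated across schools and yet tell very different stories about improvement when instruction is aimed at only one of them.

# Toy simulation (invented numbers): two assessment formats can be highly
# correlated across schools yet show very different "gains" when teaching
# targets only one format.
import random
random.seed(0)

n_schools = 100
true_ability = [random.gauss(50, 10) for _ in range(n_schools)]

def noisy(x, sd=3):
    return x + random.gauss(0, sd)

# Year 1: both formats simply reflect ability plus measurement noise.
mc_year1   = [noisy(a) for a in true_ability]    # multiple-choice test
perf_year1 = [noisy(a) for a in true_ability]    # performance assessment

# Year 2: schools coach heavily on the multiple-choice format (+6 points),
# while underlying ability, and hence the performance assessment, is unchanged.
mc_year2   = [noisy(a) + 6 for a in true_ability]
perf_year2 = [noisy(a) for a in true_ability]

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Correlation of the two formats (year 1): {pearson(mc_year1, perf_year1):.2f}")
print(f"Apparent gain on the multiple-choice test: {sum(mc_year2)/n_schools - sum(mc_year1)/n_schools:+.1f} points")
print(f"Gain on the performance assessment: {sum(perf_year2)/n_schools - sum(perf_year1)/n_schools:+.1f} points")

In this sketch the two measures rank schools almost identically, yet the coached format shows a sizable score gain while the uncoached one shows essentially none, which is exactly why gains on one measure should not be treated as interchangeable with gains on the other.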

The comprehensiveness of NAEP content is critical to its role as an independent monitor of achievement trends. To protect its independence, it should be sequestered from high-stakes uses. However, some have argued that NAEP is already high-stakes in some states, such as Texas, and will certainly become more high-stakes if used formally as a monitor for federal funding purposes. In this case, the integrity of NAEP should be protected substantively by broadening the representation of tasks within the assessment itself (such as multiple-day extended writing tasks) or by checking on validity through special studies.

Evaluate and verify the validity of gains. Special studies are needed to evaluate the validity of assessment results and to continue to check for any gaps between test results and real learning. I have in mind here both scientific validity studies aimed at improving the generalizability of assessments and bureaucratic audits to ensure that rewards for high-performing schools are not administered solely on the basis of test scores without checking on the quality of programs, numbers of students excluded, independent evidence of student achievement, and so forth. Test-based accountability systems must also be fair in their inferences about who is responsible for assessment results. Although there should not be lower expectations for some groups of students than for others, accountability formulas must acknowledge different starting points; otherwise, they identify as excellent schools where students merely started ahead.

Scientifically evaluate the consequences of accountability and incentive systems. Research on the motivation of individual students shows that teaching students to work for good grades has harmful effects on learning and on subsequent effort once external rewards are removed. Yet accountability systems are being installed as if there were an adequate research-based understanding of how such systems will work to motivate teachers. These claims should be subjected to scientific evaluation of both intended effects and side effects, just as the Food and Drug Administration would evaluate a new drug or treatment protocol. Real gains in learning, not just test score gains, should be one measure of outcome. In addition, the evaluation of side effects would include student attitudes about learning, dropout rates, referrals to special education, attitudes among college students about teaching as a career, numbers of professionals entering and leaving the field, and so forth.

Many have argued that the quality of education is so bad in some settings, especially in inner-city schools, that rote drill and practice on test formats would be an improvement. Whether this is so is an empirical question, one that should be taken seriously and examined. We should investigate whether high-stakes accountability leads to greater learning for low-achieving students and students attending low-scoring schools (again as verified by independent assessments). We should also find out whether these targeted groups of students are able to use their knowledge in nontest settings, whether they like school, and whether they stay in school longer. Finally, we should try to assess how many students are helped by this “teaching the test is better than nothing” curriculum and how many are hurt because a richer and more challenging curriculum was lost along with the love of learning.

Educators

Locate legitimate but limited test preparation activities within the larger context of standards-based curriculum. Use a variety of formats and activities to ensure that knowledge generalizes beyond testlike exercises. Ideally, there should be no special teaching to the test, only teaching to the content standards represented by the test. More realistically, very limited practice with the test format is defensible, especially for younger students, so they won’t be surprised by the types of questions asked or what they are being asked to do.

Unfortunately, very few teachers feel safe enough from test score publicity and consequences to continue teaching the curriculum as before. Therefore, I suggest conscientious discussions by school faculties to sort out the differences between legitimate and illegitimate test preparation. What kinds of activities are defensible because they teach both to the standards and to the test, and what kinds are directed only at the test and its scoring rules? Formally analyzing these distinctions as a group will, I believe, help teachers improve performance without selling their souls. For example, it may be defensible to practice writing to a prompt, provided that students have other extended opportunities for real writing; and I might want to engage students in a conversation about occasions outside of school and testing when one has to write for a deadline. However, I would resolve with my colleagues not to take shortcuts that devalue learning. For example, I would not resort to typical test-prep strategies, such as “add paragraph breaks anywhere” (because scorers are reading too quickly to make sure the paragraph breaks make sense).

Educate parents and school board members by providing alternative evidence of student achievement. Another worthwhile and affirming endeavor would be to gather such evidence locally. This could be an informal activity and would not require developing a whole new local assessment program. Instead, samples of student work, especially stories, essays, videotapes, and extended projects, could be used as examples of what students can do and of what the tests leave out. Like the formal validity studies of NAEP and state assessments, such comparisons would serve to remind us of what a single test can and cannot tell us.

Of these several recommendations, the most critical is to evaluate the consequences of high-stakes testing and accountability-based incentive systems. Accountability systems are being installed with frantic enthusiasm, yet there is no proof that they will improve education. In fact, to the extent that evidence does exist from previous rounds of high-stakes testing and from extensive research on human motivation, there is every reason to believe that these systems will do more to harm the climate for teaching and learning than to help it. A more cautious approach is needed, one that collects better information about the quality of education in ways that do not produce pernicious side effects.