Solar Energy from the Tropical Oceans

The recent climate change conference in Kyoto underscored once again how profoundly the world needs new energy sources that do not produce carbon dioxide or create other environmental problems. Yet little attention is being paid to one completely untapped resource with the potential to become an enormous source of energy: ocean thermal energy conversion (OTEC), an option largely neglected since the energy crisis of the 1970s.

OTEC is an application of solar energy that exploits the heat that the ocean captures from the sun’s rays. It is particularly appealing because the energy it generates can produce enormous quantities of nonpolluting fuels (such as hydrogen and ammonia) for transportation and also furnish energy for other applications that are now dependent on fossil fuels. It thus has environmental advantages over fossil fuels and nuclear power; avoids land-use problems associated with renewable energy technologies such as solar, wind, biomass, and hydroelectric power; and has the potential to produce far more useful and affordable energy than could be obtained from other renewable sources.

OTEC is a technology for converting some of the energy that the tropical oceans absorb from the sun, first into electricity and then into fuels. During an average day, the 60 million square kilometers of surface waters of the tropical oceans (located approximately 10 degrees north to 10 degrees south of the equator) absorb one quadrillion megajoules of solar energy, equivalent to the energy that would be released by the combustion of 170 billion barrels of oil per day. The surface waters are a warm-water reservoir 35 to 100 meters deep that is maintained night and day at a temperature of 25 to 28 degrees Celsius (°C). Below about 800 meters lies an enormous source of ice-cold water, fed by currents flowing along the ocean bottom from the northern and southern polar regions and maintained at about 4°C.
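
As a rough consistency check on these figures (my own arithmetic, not taken from the OTEC studies), the sketch below converts the quoted daily solar absorption into an oil equivalent, assuming roughly 6.1 gigajoules of energy per barrel of crude oil.

```python
# Rough consistency check on the quoted solar-absorption figure.
# Assumption (not from the source): ~6.1 GJ of energy per barrel of crude oil.
absorbed_mj_per_day = 1.0e15      # one quadrillion megajoules per day (quoted)
mj_per_barrel = 6.1e3             # ~6.1 GJ per barrel, a standard approximation

barrels_per_day = absorbed_mj_per_day / mj_per_barrel
print(f"Oil equivalent: about {barrels_per_day / 1e9:.0f} billion barrels per day")
# Prints ~164 billion barrels per day, consistent with the quoted 170 billion.
```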

OTEC uses this temperature difference to generate electric power. In principle, it is not complicated. Warm water is drawn from the surface layer into a heat exchanger (boiler) to vaporize a liquid with a boiling point of about -30°C (liquid propane, liquid ammonia, and several fluorocarbons are examples). The vapor drives a turbine attached to an electric generator. Exhaust vapor from the turbine is subsequently condensed in a second heat exchanger, which is cooled by water pumped from the cold water source below. The condensed vapor is then returned to the boiler to complete a cycle that will generate electricity 24 hours a day throughout the year (with a few weeks of down time for plant maintenance).
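
The small temperature difference is the crucial constraint. A minimal sketch, using assumed temperatures of 26°C at the surface and 4°C at depth (within the ranges quoted above), shows why: the ideal Carnot efficiency is only about 7 percent, and practical closed cycles recover a few percent at best, which is why OTEC plants must circulate very large volumes of water.

```python
# Minimal sketch with assumed temperatures (not design values from the text):
# the narrow temperature difference caps the efficiency of any OTEC cycle.
T_warm = 26.0 + 273.15   # warm surface water, kelvin (quoted range 25-28 C)
T_cold = 4.0 + 273.15    # deep cold water, kelvin (quoted ~4 C)

carnot_limit = 1.0 - T_cold / T_warm
print(f"Carnot limit: {carnot_limit:.1%}")   # about 7%
# Real closed-cycle plants net only roughly 2-3%, so enormous flows of warm
# and cold water are needed per megawatt of output.
```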

Analysis of the OTEC cycle indicates that equatorial OTEC plant ships slowly “grazing” on warm surface water at 1/2 knot could continuously generate more than 5 megawatts-electric (MWe) of net electric power per square kilometer of tropical ocean. The electricity generated would be converted to chemical energy on board the plant ship by electrolyzing water into hydrogen and oxygen. For some uses, such as furnishing fuel for the space shuttle, these chemicals can be liquefied and stored for periodic transfer to shore. However, to provide products that can be handled more easily for delivery to world ports, the hydrogen would be combined on shipboard with nitrogen (extracted from the air via liquefaction) to synthesize ammonia. Methanol fuel may also be produced with a supply of carbon, which colliers could bring to the plant ship as coal. Engineering studies indicate that OTEC plant ships designed to produce 100 to 400 MWe (net) of electricity (which is between 10 and 40 percent of the output of a large conventional power plant) would be the optimum size for commercial operation.

The U.S. Department of Energy (DOE) sponsored engineering designs that were developed between 1975 and 1982 by industrial teams under the technical direction of the Johns Hopkins University Applied Physics Laboratory (APL). Designs are available for a 46-MWe pilot OTEC plant ship that would produce 15 metric tons per day of liquid hydrogen (or 140 metric tons per day of liquid ammonia) in a conventional chemical plant installed on the OTEC vessel. It would use the same synthesis process that produces ammonia on land but would eliminate the costly methane-reforming step of that process.

An APL conceptual design is available for a 365-MWe commercial OTEC plant ship that would produce 1,100 metric tons per day of liquid ammonia. Used as a motor vehicle fuel, this output could replace approximately 150,000 gallons of gasoline per day. If operating experience confirms the utility of this conceptual design, 2,000 OTEC ammonia plant ships could supply enough ammonia fuel per day to match the total daily mileage of all the automobiles presently in the United States. If these plant ships were distributed uniformly over the tropical ocean, an area of about 60 million square kilometers, they would be spaced 175 kilometers apart.
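
The ammonia and spacing figures can be checked with simple arithmetic. The sketch below is my own consistency check, assuming a lower heating value of about 18.6 MJ/kg for ammonia and about 121 MJ per U.S. gallon for gasoline; it lands in the same range as the quoted figures.

```python
# Consistency check on the 365-MWe plant ship figures (assumed heating values).
ammonia_t_per_day = 1100.0        # quoted ammonia output
ammonia_lhv_mj_per_kg = 18.6      # assumed lower heating value of ammonia
gasoline_mj_per_gal = 121.0       # assumed energy content of a U.S. gallon of gasoline

energy_mj = ammonia_t_per_day * 1000.0 * ammonia_lhv_mj_per_kg
print(f"Gasoline-energy equivalent: {energy_mj / gasoline_mj_per_gal:,.0f} gallons/day")
# ~170,000 gallons/day, in the same range as the quoted figure.

# Spacing check: 2,000 ships spread uniformly over ~60 million square kilometers.
area_per_ship_km2 = 60.0e6 / 2000.0
print(f"Implied spacing: about {area_per_ship_km2 ** 0.5:.0f} km")   # ~173 km
```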

A history of success

OTEC’s potential for providing the United States with an alternative to imported oil was recognized in 1974 after the Organization of Petroleum Exporting Countries imposed its oil embargo. Between 1975 and 1982, DOE spent approximately $260 million on OTEC R&D in a detailed analysis of OTEC technical feasibility. Foreign studies also contributed to our information about OTEC. The findings included:

Technical feasibility. Tests and demonstrations at reasonable scale validated the power cycle performance; the cold water pipe design, construction, and deployment; the OTEC plant ship’s ability to withstand 100-year storms (storms of an intensity that occurs, on average, once in 100 years); the durability of its materials; and methods for controlling biofouling of the heat exchangers.

Successful at-sea tests of a complete OTEC system (Mini-OTEC), including a 2,200-foot cold water pipe, were conducted with private funding near Kailua-Kona, Hawaii, in 1979. The program employed a Navy scow as a platform and used off-the-shelf components supplied by industrial partners in the venture. In four months of operation, Mini-OTEC generated 50 kilowatts-electric of gross power, which confirmed the engineering predictions. It demonstrated total system feasibility at reduced scale and was the first demonstration of OTEC net power generation.

A heat-exchanger test vessel, OTEC-1, was deployed with DOE funding in 1980 and satisfactorily demonstrated projected heat-exchanger performance, water-ducting, and biofouling control at a 1-MWe scale. These results provided the scientific justification for the planned next step: a 40-MWe pilot plant demonstration.

Environmental effects. Effects of the environment on OTEC plant ship operations and effects of OTEC on ocean ecology were studied and analyzed. Hurricanes do not occur near the equator where OTEC plant ships will be deployed. Small-scale water-tunnel tests indicated that the pilot plant ship and cold water pipe can withstand equatorial 100-year-storm conditions with a good safety margin. A commercial 365-MWe OTEC ammonia plant ship would be about the size of a large oil tanker and would be even less affected by waves and current than the pilot plant design.

OTEC uses large volumes of warm and cold water that pass through fish barriers to the heat exchangers and are mixed and discharged at the bottom of the ship. The discharged waters are denser than the surface ocean waters, so they descend to a depth of about 500 meters, there spreading laterally to form a disk where the density of the discharged plume matches that of the ambient ocean water. Diffusion of heat from this layer to the surface is negligible for one plant ship. But effects on the surface layer could become detectable and possibly significant if large numbers of plant ships were deployed close together, or if the cold nutrient-rich water discharged were deliberately mixed into the surface layer. This option could lead to a substantial increase in marine life, similar to that occurring off Peru, where upwelling brings nutrient-rich cold water to the surface.

Plant ship spacing would have to be chosen on the basis of an acceptable tradeoff between total power delivery and environmental impact. If one-tenth of one percent of the incident solar energy were converted to electricity, one square kilometer of ocean would generate 0.2 MWe of net electric power. Roughly 1,800 square kilometers could supply solar heat for continuous operation of a 365-MWe OTEC plant. This would mean an average spacing between ships of 45 kilometers, and the fuel produced by plant ships deployed at that density across the tropical ocean would be equivalent to 14 times the total U.S. gasoline energy use in 1996.
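
The arithmetic behind these area and spacing figures is straightforward; the sketch below reproduces it, assuming a day-night average tropical insolation of about 200 watts per square meter (an assumption of mine, not a figure from the text).

```python
# Reproducing the area and spacing arithmetic (assumed average insolation).
insolation_w_per_m2 = 200.0       # assumed day-night average for the tropics
conversion_fraction = 0.001       # one-tenth of one percent, as quoted

watts_per_km2 = insolation_w_per_m2 * 1.0e6        # 1 km^2 = 1e6 m^2
mwe_per_km2 = watts_per_km2 * conversion_fraction / 1.0e6
print(f"{mwe_per_km2:.1f} MWe per km^2")                          # 0.2 MWe, as quoted

area_km2 = 365.0 / mwe_per_km2
print(f"Area for a 365-MWe plant: about {area_km2:,.0f} km^2")    # ~1,800 km^2
print(f"Implied ship spacing: about {area_km2 ** 0.5:.0f} km")    # ~43 km, close to the quoted 45
```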

OTEC ammonia fuel commercial development. Tests of ammonia fuel (shown in bench tests to have an octane number of 130) in a four-cylinder Toyota engine have demonstrated performance at an optimum fuel-air ratio, in accord with theoretical predictions. Early work indicated that some hydrogen, which could be supplied by partial dissociation of the ammonia entering the engine or by other means, would be needed in an ammonia-fueled internal combustion engine to achieve adequate performance over the desired operating range. The tests show that operation at slightly fuel-rich conditions reduces nitrogen oxide emissions to one-tenth the concentration observed in today’s automobiles.

The physical properties of ammonia are nearly the same as those of liquid propane, so the current procedures established for the safe, gas-tight handling and storage of liquid propane in automobiles and filling stations are applicable to ammonia. Ammonia can become a practical motor vehicle fuel, but much more engine R&D and storage and delivery design work will be necessary to define the total system requirements and costs for widespread ammonia car operations.

Ammonia is a major industrial chemical presently made from hydrogen obtained by reforming natural gas. Liquid ammonia is produced and distributed safely worldwide by tankers, pipelines, and trucks in quantities exceeding 100 million metric tons per year. Commercial experience in producing, storing, and transporting liquid ammonia (and hydrogen) suggests that adherence to existing regulations will ensure safe operations. Most ammonia is used as fertilizer and is commonly applied directly to the soil by individual farmers. It has a penetrating odor and is toxic in high concentrations but does not burn or explode at atmospheric pressure. No serious health-related problems or explosive hazards have been experienced in its use.

Competitiveness and financing. OTEC systems are “low” technology. Operating temperatures and pressures are the same as those in household air conditioners. About two-thirds of the required OTEC system components and subsystems are commercially available. Another 10 to 15 percent need to be scaled up and optimized for OTEC use, which adds some cost unpredictability. Only the cold water pipe construction, platform attachment, and deployment will require new types of equipment and procedures. If we assign 100 percent cost uncertainty to these items, the overall investment uncertainty of the OTEC system is around 15 to 25 percent. This relatively low uncertainty permits cost estimates to be made with reasonable confidence.
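
One way such an overall figure can arise (an illustration of mine, with assumed cost shares that the text does not give): weight each group's uncertainty by its share of total cost and combine the groups as independent contributions.

```python
# Illustrative only: assumed cost shares combined with the quoted uncertainty levels.
# (description, cost share of total investment, fractional cost uncertainty)
items = [
    ("commercially available components",   0.65, 0.05),   # shares and 5% figure assumed
    ("components needing scale-up",         0.15, 0.30),   # assumed
    ("new items: cold water pipe, etc.",    0.20, 1.00),   # 100% uncertainty, as in the text
]

# Treat the groups as independent and combine their weighted uncertainties in quadrature.
overall = sum((share * unc) ** 2 for _, share, unc in items) ** 0.5
print(f"Overall investment uncertainty: about {overall:.0%}")   # ~21%, inside the quoted 15-25% range
```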

The ultimate sales price of fuel from OTEC plant ships depends on the cost to amortize plant investment (including construction costs) over plant life, plus operation and maintenance costs, including shipping to consumers. For a range of scenarios, the cost of OTEC-ammonia delivered to U.S. ports is estimated to vary from $0.30 to $0.60 per gallon (in 1995 dollars). Adjusting for the lower mileage per gallon of ammonia, this would be equivalent to gasoline costing $0.80 to $1.60 per gallon. These estimates are strongly dependent on assumed interest rates, amortization times, and whether tax credits and other subsidies that are available to gasoline users would be available to ammonia producers as well.
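
The gasoline-equivalent adjustment follows from the lower energy density of liquid ammonia. The sketch below is my own check, assuming roughly 11.5 MJ per liter for liquid ammonia and 32 MJ per liter for gasoline; it reproduces the quoted range.

```python
# Converting an ammonia price into a gasoline-equivalent price (assumed energy densities).
ammonia_mj_per_liter = 11.5       # assumed
gasoline_mj_per_liter = 32.0      # assumed

# Gallons of ammonia needed to replace one gallon of gasoline, energy for energy.
mileage_ratio = gasoline_mj_per_liter / ammonia_mj_per_liter   # ~2.8

for ammonia_price in (0.30, 0.60):
    print(f"${ammonia_price:.2f}/gal ammonia  ->  ~${ammonia_price * mileage_ratio:.2f}/gal gasoline-equivalent")
# Prints roughly $0.83 and $1.67, consistent with the quoted $0.80 to $1.60 range.
```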

In the future, gasoline prices are expected to increase because of resource depletion. But with improved technology and expanded production, prices for ammonia produced by OTEC should decrease.

Finishing the job

The seven-year DOE R&D program provided positive answers to doubts about OTEC. It demonstrated at a reasonable scale that the OTEC concept for ocean energy production is technically feasible. The next step was to have been construction of a 40-MWe (nominal) pilot plant that would provide firm cost and engineering data for the design of full-scale OTEC plant ships. Planned funding for this step was canceled in 1982, after the Reagan administration, which had different energy priorities from those of the Carter administration, took office. Since 1982, government support of OTEC development has been undercut further by the drop in oil prices, which has reduced public fears of an oil shortage and its economic consequences, and by the opposition of vested interests committed to conventional energy resources.

Lack of support for OTEC research is part of a general lack of interest in energy alternatives designed to address fundamental problems that will not become critical for several decades. If and when the need for measures to forestall energy shortages and/or severe environmental effects from present energy sources becomes evident, the long lead times needed for the costly transition from fossil fuels to sustainable energy resources may prevent action from being taken in time to be effective. It is prudent to renew OTEC R&D now.

There are questions about OTEC that cannot be answered without further development and testing: the effects of scale-ups on projected costs, required spacing of a network of OTEC plant ships to satisfy environmental restrictions, logistical problems associated with the widespread use of ammonia as a transportation fuel, and requirements for good performance in automobile and other combustion systems.

The date by which OTEC might be expected to become commercially viable is too far off to attract entrepreneurs. In view of the substantial capital cost of plant ship construction, government support is essential. At this stage, we cannot promise that OTEC will be commercially viable. But it is certainly promising enough to justify a substantial federal research investment.

We believe that further work to develop OTEC fuels could lead, by the middle of the next century, to significant reductions in carbon emissions, air pollution, and oil imports. The nation should be willing to make a small investment in designing, building, and evaluating a 45-MWe OTEC plant ship to demonstrate the feasibility and economics of the concept on a scale large enough to permit confident construction and operation of full-scale commercial OTEC plant ships.

The United States should begin a program that includes the following features:

  • Identify potential suppliers of OTEC systems and components, and bring up to date the predictions of their costs and performance. The survey should include new high-performance OTEC heat-exchanger options that have been demonstrated in R&D programs.
  • Conduct systems engineering studies to define a program with a long-range goal of OTEC ammonia plant ship development and commercialization that could attract government and industry support. We recommend an introductory program, lasting a few years, that will test attractive options with cost-shared funding. It would include analysis and experimental programs to provide firm data for heat-exchanger optimization (including designs, materials, and costs) and the hydrodynamics of OTEC water inlet and exhaust trajectories, including their interactions with surface and subsurface water flows and temperatures.
  • Define potential roles of government and industry in initiating and conducting the development program.
  • Determine a schedule and funding plan for OTEC development that would make it possible to have significant OTEC commercial operation by the year 2050.

Inventing the Future

For the past fifty years, the U.S. national science and technology enterprise has evolved under the heavy influence of the engineer Vannevar Bush. Most students of science and technology policy, as well as many practicing scientists and engineers, know Bush’s role, but few people know much about Bush the man or the forces that shaped him and ultimately his ideas. G. Pascal Zachary, in Endless Frontier: Vannevar Bush, Engineer of the American Century, conveys for the first time a full picture of Bush as the inventor of machines as well as organizational systems.

We learn from Zachary’s detailed historical and psychological profile that Bush, the son of a New England minister, earned patents in areas such as topographical surface mapping and analog calculating before he graduated from college. He maintained his interest in invention as he progressed to become a professor at MIT. During the 1920s and 1930s he had extensive interaction with military R&D, which helped prepare him for the critical role he would play in World War II.

Zachary leaves no doubt that he considers Bush a visionary. As an architect of institutional design for national purposes, Bush achieved something on the level of Alexander Hamilton’s initial building of our national financial institutions in the 1790s or Woodrow Wilson’s efforts to influence and shape the design of the League of Nations after World War I. In the 1940s, as director of the Office of Scientific Research and Development, Bush mobilized and managed the innovative power of the entire U.S. network of academic and industrial scientists and engineers for the sole purpose of winning the war. In so doing, he glimpsed, perhaps for the first time in the United States, the power of organizing large-scale science and technology assets for attaining national political and economic goals. In fact, even before the end of the war, he saw that the nation might benefit from a permanent national investment in science and technology. With this simple vision, Bush set out in 1945 to design and implement a system for long-term state-supported basic scientific and technological research. This design and the subsequent institutions and programs developed for national science and technology were built around a number of characteristics central to Bush the man. Zachary does an excellent job of describing in great detail the psychological, social, and technological forces and constraints that shaped Bush and his subsequent design of the U.S. science and technology enterprise.

Zachary reveals how Bush’s thinking was characterized by order, vision, control, invention, and rationality, all of which served him well during the war years. But Bush can be seen as the returning war hero who has a hard time adjusting to civilian life. The take-charge certainty that made him so effective in marshalling national intellectual resources was ill-adapted to the political reality in which democracy would play an essential role in crafting science and technology policy.

Unlike Wilson or Hamilton, Bush lacked deep knowledge of philosophy, political theory, or history. He believed that the science and engineering approach to problem solving and national development was superior to the rougher-hewn and often inexact political process associated with modern democracy. The combination of this belief with his wartime experience of almost unimaginable power led Bush to imagine a world in which science for the public good could be largely separated from public discourse. In fact, throughout this biography, we learn repeatedly that Bush did best when he had to interact only with elites, the resources were virtually unlimited, and oversight was nil. It was in this world, which for the most part was non-democratic in its character, that Bush felt that science and technology could best be guided for the national good.

A blinkered vision

Throughout his life, Bush retained his drive to invent, often working on several machines or systems at a time. One of his most remarkable ideas, which he developed over nearly 15 years, appeared in a 1945 Atlantic Monthly article. “As We May Think” details, with amazing clarity, what the Internet is today and what digital libraries will be tomorrow. Long before even the simplest computer existed, he could already conceive of advanced computational systems and their potential purposes. He could not accept, however, that computers would use digital systems rather than the analog system that he imagined. Zachary perceptively observes that this was a sign of Bush’s intellectual and practical limitations.

Zachary does not seem to notice another important limitation in Bush’s thinking. A gifted and confident rationalist, Bush had only a limited understanding of the nonrational–or in organizational parlance, politics. He did not anticipate how science would become part of the larger public policy arena. Ever the engineer, Bush developed a means by which a nation might do science but not a means by which a nation might determine what science to do. He took it for granted that only scientists and engineers could determine what science might be best pursued.

For their part, the leading politicians of the day were as unaware of the importance of science and technology as Bush was of politics. To President Truman, Bush was a bother. And the usually rational Bush could be quite emotional on the subject of Truman. In the view of Bush’s political nemesis, Senator Kilgore of West Virginia, federal spending on research should be guided not by the needs of science but by its usefulness in attaining economic justice and equity. Given the limited perspectives of the major participants, we should not be surprised to learn that little progress was made in the direction of building a democratically oriented national science and technology enterprise. Bush’s vision took us to the first stage of democratic science: defining the means. Unfortunately, he could not see as far as the second stage, which is to define the relationship between means and ends and to develop a way to choose what end is most desired.

Ultimately, this means that, unlike the civilian-led military or the agricultural research system that links farmers closely with scientists, the national science enterprise that Bush designed is quite limited and in serious need of completion. Readers of Zachary’s excellent biography of Bush should see clearly the limits of Bush’s vision and understand the constraints that these limits place on the nation’s ability to develop a science agenda driven by what the people want. No doubt Bush himself would want to do some tinkering with his invention by this time.

Vannevar Bush, inventor, dreamer, and war hero, would be and should be proud of his two biggest inventions: a conception of how we might extend our minds beyond our physical limits through information technology, and a way to drive our national destiny through science and technology. Both are the kind of invention that changes the way we think and, as a result, changes our world.

Future Perspectives on Nuclear Issues

In the United States, we’ve traditionally optimized new advanced technologies to serve our nation’s needs; this has helped us craft an impressive economy and quality of life. With nuclear technologies, we have not followed this pattern. With only a few exceptions such as nuclear medicine, we have done a poor job of evaluating nuclear technologies, addressing real risks, and optimizing benefits. Instead, we worry about our dependence on fossil fuels and increasing oil imports, but we don’t use advanced nuclear energy systems that we’ve licensed and are selling overseas. Many environmentalists who want to reduce carbon emissions don’t want to consider nuclear power. We may worry about excessive stockpiles of nuclear weapons, but as we dismantle our own weapons, we store the complex classified components that would allow us to rapidly rebuild weapons. Some who are concerned about the dangers of nuclear waste oppose efforts to move the waste from power plants to a more remote and secure location or to explore systems that enable far better management of waste issues. We have consumer groups concerned about food safety that accept bacterial contamination of food instead of supporting irradiation of food supplies.

In a world of increasing global competition, we can’t afford to accept these contradictions. We can’t afford to abandon the broad suite of nuclear technologies when they hold real promise for further national benefits in many areas.

Although at first sight these issues appear to be distinct, they are tied together by their dependence on nuclear science and by strong public concerns about nuclear technologies in general. These public concerns have frequently been molded by an antinuclear movement focusing only on risks, both real and perceived, in ways that have been tremendously appealing to the mass media. Actions to address risks have rarely received equal attention and have suffered from lack of national leadership in key cases. In many cases, decisions and policies crafted in one policy arena are limiting our options in other arenas. We need a dialogue focused on benefits and risks of nuclear technologies. Where real risks exist, we need research focused on quantifying and mitigating them, followed by solid progress in addressing them. Where past programs have lacked leadership to achieve success, we need to energize that leadership. The time has come for a careful scientifically based reexamination of nuclear issues in the United States.

Energy issues

The United States, like the rest of the industrialized world, is aging rapidly. Between 1995 and 2030, the number of people in the United States over age 65 will double from 34 million to 68 million. Just to maintain our standard of living, we need dramatic increases in productivity as a larger fraction of our population retires from the workforce. Increased productivity requires abundant sources of economical energy. By 2030, almost a third of the population of the industrialized nations will be over 60. The rest of the world (today’s “underindustrialized” countries) will have only 16 percent of their population over age 60 and will be ready to boom. As those developing nations build economies modeled after ours, there will be intense competition for the resources that underpin modern economies. Competition for energy resources may be a key driver of future global instability.

Consider just a few facts about this future competition. In 1995, the United States, with 4.6 percent of the world’s population, consumed 22 percent of the world’s energy production and 28 percent of the world’s electricity. Of the 420 quads of energy used around the globe, the United States consumed about 91 quads, and 85 percent of that was derived from fossil fuels. By 2030, it is estimated that world energy use will be more than 800 quads, with the United States then using around 130 quads. This means that between 1995 and 2030, the United States will need to find additional energy resources of about 40 quads at the same time that the rest of the world is finding another 400 quads. That will be real competition. Furthermore, the influence of the Persian Gulf on the world’s oil supplies is projected to sharply intensify during this period. The Gulf, which now accounts for about half of the world’s oil exports, is projected to account for about three-quarters by 2015. These simple facts should represent a national concern of the greatest magnitude.
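
Laid out as simple arithmetic (quoted figures only), the scale of the coming competition is easy to see:

```python
# The competition argument in round numbers (all figures as quoted above).
world_1995, us_1995 = 420.0, 91.0      # quads of primary energy, 1995
world_2030, us_2030 = 800.0, 130.0     # projected quads, 2030

print(f"Additional U.S. demand by 2030:   about {us_2030 - us_1995:.0f} quads")        # ~40
print(f"Additional world demand by 2030:  about {world_2030 - world_1995:.0f} quads")  # ~380, roughly 400
print(f"U.S. share of world energy use: {us_1995 / world_1995:.0%} in 1995, "
      f"{us_2030 / world_2030:.0%} in 2030")
```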

The economic impact of the energy business in the United States is very large. We currently produce and import raw energy resources worth over $150 billion per year. Approximately $60 billion of that is imported oil or natural gas. We then process that material into energy feedstocks such as gasoline. Those feedstocks, the energy we consume in our cars, factories, and electric plants, are worth more than $500 billion per year.

We debate defense policy every year, as we should. But we don’t debate energy policy, even though it costs twice as much as our defense, other countries’ consumption is growing dramatically, and energy shortages are likely to be a prime driver of future military challenges.

And even when we have discussed energy independence, we’ve largely ignored public debate on the role of nuclear energy in achieving this independence.

We’ve certainly done little to encourage the use of nuclear energy. The public, with ample assistance from the antinuclear movement, was frightened by the Three Mile Island (TMI) and Chernobyl events. Unfortunately, they were not effectively informed that TMI, though certainly a major accident, led to no loss of life, because the plant was well engineered to contain any accident. Chernobyl was not. We have not completed actions to address the real risks of nuclear waste, but we place extremely stringent radiation exposure limits on all nuclear energy plants.

The future growing global competition for carbon-based energy resources strongly argues for a careful reevaluation of nuclear energy and a reassessment of the barriers to its current use in this country. For this reason, I believe it is in the national security interests of the United States to maximize our use of economical nonfossil energy sources wherever possible. With nuclear power already providing 20 percent of our electrical energy, it makes solid business sense to ask how we can best use this significant resource in the future.

The administration should be adding another reason to revisit nuclear energy. The president has outlined a program to stabilize U.S. production of carbon dioxide and other greenhouse gases at 1990 levels by some time between 2008 and 2012, and the administration is strongly supporting a policy to control greenhouse gas emissions to avoid potential future climate changes. Unfortunately, I fear that the president’s goals are not achievable without seriously affecting our economy. A recent report from several of our national laboratories studied the issue and evaluated the impact of different carbon tax levels. It found that a $50/ton carbon tax would be needed to reach the president’s goals. But that would result in an increase of 12.5 cents per gallon in the price of gasoline and of 1.5 cents per kilowatt-hour (kWh) in the price of electricity, almost a doubling of the current cost of coal- or natural gas-generated electricity.
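
The pass-through of a carbon tax to fuel prices can be checked with rough carbon contents. The sketch below uses my own assumed values of about 2.4 kilograms of carbon per gallon of gasoline and 0.26 kilograms of carbon per kilowatt-hour of coal-fired electricity; the results land close to the figures cited from the laboratories' study.

```python
# Rough pass-through of a $50/ton carbon tax (assumed carbon contents, not from the report).
tax_dollars_per_kg_carbon = 50.0 / 1000.0   # $50 per metric ton of carbon

gasoline_kg_c_per_gallon = 2.4              # assumed
coal_power_kg_c_per_kwh = 0.26              # assumed

print(f"Gasoline:   +{100 * tax_dollars_per_kg_carbon * gasoline_kg_c_per_gallon:.1f} cents/gallon")  # ~12
print(f"Coal power: +{100 * tax_dollars_per_kg_carbon * coal_power_kg_c_per_kwh:.1f} cents/kWh")      # ~1.3
```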

I have yet to hear the administration state that we need nuclear energy to meet the president’s goal, in spite of the fact that in 1996 nuclear power plants prevented the emission of 147 million metric tons of carbon, 2.5 million metric tons of nitrogen oxides, and 5 million metric tons of sulfur dioxide. Our electric utilities’ emissions of these gases were 25 percent lower than they would have been if fossil fuels had been used instead of nuclear energy.

The United States developed a new generation of nuclear power plants, which are now being sold overseas and have been certified by the U.S. Nuclear Regulatory Commission. Although they are even safer than our current models, they aren’t being used in this country. Looking ahead, we are developing technologies such as passively safe reactors, lead-bismuth reactors, and advanced liquid metal reactors that generate less waste and are proliferation-resistant. Will they be used?

No new reactors have been ordered in this country for almost a quarter of a century, due at least in part to extensive regulation and endless construction delays, plus our national failure to address high-level waste disposal. These problems drive costs up, and nuclear power is now more expensive than power from fossil-fueled plants. The average price of nuclear power nationwide is close to 7 cents per kWh, which is almost double the current cost of electricity from a combined-cycle natural gas plant. But over time, increasing global demand will drive fossil fuel prices higher. At the same time, we need to seriously study and implement approaches to minimize the costs of nuclear plant construction. It should be noted that when the capital cost of a nuclear plant is excluded, the economics look much different. Operating costs of nuclear plants have improved every year and are now estimated to be 1.9 cents per kWh, which is quite competitive with other options.
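
A simple decomposition of the quoted numbers makes the capital-cost point concrete; nothing below goes beyond the figures already given in the text.

```python
# Splitting the quoted nuclear price into operating and capital-related components.
nuclear_price_cents_per_kwh = 7.0     # quoted average price of nuclear power
operating_cents_per_kwh = 1.9         # quoted operating cost

capital_and_other = nuclear_price_cents_per_kwh - operating_cents_per_kwh
share = capital_and_other / nuclear_price_cents_per_kwh
print(f"Capital charges and other fixed costs: about {capital_and_other:.1f} cents/kWh "
      f"({share:.0%} of the delivered price)")
# Roughly 5 cents/kWh, or about 70%: running plants are competitive;
# the burden is recovering the cost of construction.
```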

The effect of the lack of orders for new nuclear plants is that the nuclear energy technology now operating in the United States is over 20 years old. As our nuclear energy industry atrophies and our premier educational programs in nuclear energy wither, we are less and less able to influence the development of global nuclear energy policies. Yet the global development of nuclear energy can fundamentally affect our national security. If other nations develop this energy source without adequate safeguards, proliferation of fissile materials can enable acquisition of nuclear weapons by new nations and by rogue states, with serious consequences for global stability. Furthermore, if other major nations such as China do not use nuclear energy effectively, we may all be affected by environmental degradation resulting from their extensive use of fossil fuels. In fact, China is projected to be the world’s largest emitter of greenhouse gases by 2015.

The United States has created a regulatory environment in which nuclear energy is not seen as a sound investment, but nuclear plants are being planned in most of the rest of the world. We need absolute safety; that’s a given. But could we have that safety through approaches that don’t drive nuclear energy out of consideration for new U.S. plants? Deregulation of electric utilities will put additional pressure on optimizing costs for electric power sources, and nuclear energy may become temporarily even less attractive unless steps are taken during deregulation to favor technologies that avoid fossil fuels.

A recent report, Federal Energy Research and Development for the Challenges of the Twenty-First Century, done at the administration’s request by the President’s Committee of Advisors on Science and Technology and chaired by Harvard University’s John Holdren, calls for a sharply enhanced national effort in nuclear energy. It urges a “properly focused R&D effort to see if the problems plaguing fission energy can be overcome-economics, safety, waste, and proliferation.” I strongly endorse the conclusion of this report that we dramatically increase spending in these areas, for reasons ranging from reactor safety to nonproliferation.

Before leaving energy issues, I need to note another national decision about nuclear energy that is complicating progress today. In 1977, President Carter halted all U.S. efforts to reprocess spent nuclear fuel and develop mixed-oxide fuel (MOX) for our civilian reactors, on the grounds that plutonium was separated during reprocessing. He feared that the separated plutonium could be diverted and eventually transformed into bombs. He argued that the United States should halt its reprocessing program as an example to other countries, and he expected them to follow our lead. Unfortunately, the premise of that decision was wrong. Rather than simply accepting the U.S. judgment, other countries made their own decisions about what is safe and cost-effective. France, Great Britain, Japan, and Russia all now have MOX fuel programs.

Today, reprocessing would not make economic sense in the United States, given the current low price of fresh fuel. But the lack of reprocessing expertise in this country has limited our options for handling spent nuclear fuel and is undermining our efforts to deal with the disposition of excess weapons material as well as our ability to influence international reactor issues. Furthermore, at some point fuel prices are likely to increase again to the point where reprocessing may become economically attractive in the United States.

Controlling nuclear weapons

It is strongly in the interest of global stability to reduce the stockpile of the former Soviet Union (FSU). Many countries in the world would make a similar statement regarding the stockpile of the United States and other nuclear states. We must help ensure the best possible control over all nuclear weapons and weapons-grade materials. International control of fissile materials should minimize the potential for diversion into rogue-state weapons. We should seek to configure nuclear weapons around the globe in the most stable manner, with minimum reliance on hair-trigger responses. In both nuclear energy and stockpile issues, the nation is not moving fast enough to address real risks.

Our current stockpile size is being set by bilateral agreements with Russia. Bilateral agreements make sense if we are certain who our future nuclear adversaries will be, and they are particularly useful in forcing a transparent build-down by Russia. But our next nuclear adversary may not be Russia, and we do not want to find ourselves limited by a treaty with Russia in a conflict with another entity.

We need to decide what minimum stockpile levels we really need for our own best interests to deal with any future adversary. For that reason, I suggest that, within the limits imposed by START II, the United States move away from further treaty-imposed limitations to what I call a “threat-based stockpile.” Based on the threat I perceive right now, I think our stockpile could be further reduced. We need to challenge our military planners to identify the minimum necessary stockpile size, and that minimum size should count our present “inactive reserve” as well. In fact, our current practice of maintaining a large inactive reserve is not only expensive, it also complicates any attempt to encourage Russia to reduce its total nuclear stockpile.

Reducing stockpiles through a careful process can increase global stability. We should consider other approaches to increasing that stability. As one example, we should consider stepping back further from the nuclear cliff by “de-alerting” weapons, continuing the path started when we stopped flying nuclear-armed bombers on alert status. Furthermore, the necessity for the ground-based leg of the nuclear triad should be reexamined, given its greater vulnerability to a first strike, which encourages shorter response times for this leg under a “use them or lose them” line of reasoning.

At the same time, as our stockpile is reduced and we are precluded from testing, we must still maintain our confidence in the integrity of the remaining stockpile and our ability to reconstitute it if necessary. We are relying on the science-based stockpile stewardship program to improve our knowledge of all aspects of a weapon’s performance and to allow us to identify and correct any concerns with stockpiled weapons. This program deserves strong national support. Although cost certainly isn’t the primary driver of our stockpile’s size and composition, the actions that I recommend would make it possible to save some of the $30 billion we spend each year on the nuclear triad.

The dismantlement of thousands of nuclear weapons in Russia and the United States has left both countries with large inventories of perfectly machined classified components that could allow each country to rapidly rebuild its nuclear arsenals. As the first step in an integrated materials disposition process, both countries should set a goal of converting those excess inventories into nonweapon shapes as quickly as possible. The more permanent those transformations and the more international the verification that can accompany the conversion and control of that material, the better. Current appropriations legislation developed in the U.S. Senate’s Energy and Water Development Subcommittee, which I chair, clearly sets out the importance of converting those shapes as part of an integrated plutonium disposition program.

The National Research Council has recommended that disposition of weapons plutonium be guided by a “spent fuel” standard, under which weapons plutonium is rendered no more attractive for diversion or reconstitution into nuclear weapons than the hundreds of tons of plutonium now residing in civilian spent fuel. This standard leads to the nation’s current “dual-track” approach, which includes the use of weapons plutonium in MOX fuel for civilian reactors. (The second track of the dual-track approach involves immobilization of weapons plutonium with high-level waste.) But some critics argue against the use of MOX fuel on the grounds that this would be inconsistent with the earlier policy decision not to reprocess civilian spent fuel into MOX fuel to produce energy. They fear that use of MOX fuel derived from warheads will encourage reprocessing of civilian fuel and again raise concerns about diversion of plutonium into weapons. In reality, use of weapons plutonium as MOX fuel has no bearing on any decision to revisit reprocessing of civilian spent fuel.

I believe that MOX is the best technical solution. The economic performance of MOX, however, needs further study. Ideally, incentives can be developed to speed Russian materials conversion while reducing the cost of the U.S. effort. This is a challenging area for further work, and a solution might parallel aspects of the U.S.-Russian agreement on Highly Enriched Uranium (HEU), under which Russian HEU is being blended down to enrichment levels suitable for use as reactor fuel and then sold for civilian use.

Nonproliferation concerns demand more than attention to existing weapons and materials directly extracted from complete weapons. Large quantities of weapons-quality material are present in a wide range of forms throughout the FSU weapons complex. The Nunn-Lugar program provides resources for the Materials Protection, Control, and Accounting (MPC&A) program, which couples our national laboratories with FSU institutions to place these materials under effective controls. The MPC&A program is vital to avoid the movement of weapons material onto the black market. Funding for the MPC&A program is critically important, and Congress needs to carefully and fully fund this effort. The program continues to find new sources of weapons material in the FSU, and the MPC&A program must continue as a robust initiative until we have confidence that all such materials are adequately safeguarded.

A related nonproliferation concern is the possibility of a “brain drain” in which scientists from the FSU with detailed knowledge of weapons of mass destruction would be enticed to provide their services for better and more stable compensation in various rogue countries. Programs like the Initiatives for Proliferation Prevention (IPP) are focused on providing commercial opportunities for those scientists, provided that they stay at their current institutions. Each year in Congress we debate whether programs such as MPC&A and IPP are “foreign aid.” To my way of thinking, these programs are vastly different from foreign aid; they directly serve U.S. interests and should be fully funded.

Nuclear waste

The nation’s handling of nuclear waste issues is a disgrace that blocks our progress on nuclear energy. The path we’ve been following toward a permanent repository at Yucca Mountain has not led anywhere to date. It is strongly in the interest of all citizens to dispose of radioactive waste so as to ensure minimal risk to current and future generations. We need to move accumulated wastes out of populated areas into a few well-secured locations.

We’re on a course to bury all our spent nuclear fuel, despite the fact that a spent nuclear fuel rod still has 60 to 75 percent of its energy content (if we count only the fissile material; it’s higher if we count the uranium content that could be converted into fuel) and despite the fact that Nevadans need to be convinced that the material will not create a hazard for over 100,000 years, which is the period of concern identified by the National Research Council.

Reprocessing the spent fuel could help mitigate the potential hazards in a repository by separating out the long-lived fissile materials from the material destined for the repository, and could help us recover the energy content of the spent fuel, perhaps for a future use. Such reprocessing significantly reduces the volume and radiotoxicity of the resulting waste stream. Economic analysis using current fuel prices argues against the use of reprocessed fuel to produce energy, but our path toward a permanently sealed repository forecloses revisiting that analysis at possible future energy prices. Our earlier decision never to reprocess spent fuel has blocked this option.

In the short term, the nation badly needs a well-secured location for interim storage of spent fuel to avoid the current practice wherein spent fuel is stored near nuclear plants scattered across the nation. We can ensure better security and greater protection for the public by using an interim storage facility. The alternative is that some nuclear plants will be forced to shut down as they run out of storage space, a fact that is not lost on some of the antinuclear groups. I propose that we start interim storage now, while we continue the research necessary to move toward a permanent repository at Yucca Mountain. This is hardly an original thought: 65 senators and 307 representatives agreed with the importance of interim storage. So far, the administration has threatened to veto any such progress and has shown no willingness to discuss any alternatives. As we proceed toward the permanent repository, we should also study alternatives to the sealed permanent repository. Those options might lead to attractive alternatives to current ideas in the decades before we seal any permanent repository.

There may be several options that deserve to be studied. One approach would be to use spent nuclear fuel for electrical generation. A group of researchers from several U.S. companies, using technologies developed at three of our national laboratories and drawn from Russian institutes and the Russian nuclear navy, has proposed using an accelerator instead of a reactor to produce energy because it would eliminate the need for any critical assembly. The technique, known as accelerator transmutation of waste, would entail minimal processing, and that could be done so that weapons-grade materials are never separated or available for potential diversion. Further, this isn’t reprocessing in the sense of repeatedly recirculating fissile materials back into new reactor fuel; this is a system that integrates some processing with the final disposition.

At the end of the process, only a little material goes into a repository. What’s more, that material consists primarily of isotopes such as cesium-137 that have relatively short half-lives compared to the plutonium and other long-lived isotopes in the initial spent fuel. As a result, this material would be a serious hazard for perhaps 300 years, a far cry from 100,000 years. The developers of this technology believe that the sale of electricity might go a long way toward offsetting the cost of the system, so this process might not be much more expensive than our present repository solution. Furthermore, it would dramatically reduce any real or perceived risks from our present path. This is the type of option that I want to see investigated aggressively.
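
The 300-year figure follows from the half-lives involved. A short sketch, using the well-known half-lives of cesium-137 (about 30 years) and plutonium-239 (about 24,000 years), shows the contrast:

```python
# Why ~300 years: decay of the dominant short-lived isotope versus plutonium.
cs137_half_life_years = 30.1        # cesium-137
pu239_half_life_years = 24_100.0    # plutonium-239

def remaining_fraction(years, half_life):
    """Fraction of the original activity left after the given time."""
    return 0.5 ** (years / half_life)

print(f"Cs-137 after 300 years: {remaining_fraction(300, cs137_half_life_years):.1%} remains")   # ~0.1%
print(f"Pu-239 after 300 years: {remaining_fraction(300, pu239_half_life_years):.1%} remains")   # ~99%
# Ten half-lives reduce cesium-137 activity about a thousandfold, while
# plutonium-239 is essentially undiminished, hence the 100,000-year concern.
```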

Nuclear waste issues don’t stop with high-level wastes. There is an increasingly desperate need in the country for low-level waste facilities. In California, important medical and research procedures are at risk because the administration continues to block the state government from fulfilling its responsibilities to care for low-level waste at facilities such as Ward Valley.

Understanding hazards

We regulate exposure to low levels of radiation using a so-called “linear no-threshold” model, the premise of which is that there is no safe level of exposure. This model forces us to regulate radiation to levels approaching 1 percent of natural background, despite the fact that natural background varies by more than 75 percent within the United States. Radiation control standards require that we achieve exposures under the ALARA (As Low As Reasonably Achievable) principle. That is a very expensive approach that affects every application of nuclear processes in the country.

On the other hand, many scientists think that living cells, after millions of years of exposure to naturally occurring radiation, have adapted so that low levels of radiation cause little if any harm. In fact, some studies even suggest that low doses of radiation may improve health. Today, we simply do not know with confidence where the truth lies. But that truth is very important. We spend over $5 billion each year to clean contaminated Department of Energy sites to levels below 5 percent of background, and radiation exposure regulations governing nuclear power plants significantly increase the cost of construction and operation. In this year’s Energy and Water Appropriations Act, we initiated a 10-year program to understand how radiation affects genomes and cells and provided $3 million for the first year’s study. From this effort, we should finally learn how radiation affects living organisms, and for the first time we will be able to set radiation protection standards based on fundamental knowledge of actual risk: exposure standards that ensure very low risk to the public and to radiation workers and that are defensible on solid scientific foundations.

As one more example of an area of nuclear technology where we are making questionable decisions driven by public fears, consider food safety. Ensuring the safety of economical food supplies is certainly a responsibility of government. Earlier this year, Hudson Foods recalled 25 million pounds of beef, some of which was contaminated by Escherichia coli bacteria. The administration proposed tougher penalties and mandatory recalls that cost millions. But E. coli bacteria can be killed by irradiation. Furthermore, irradiation has virtually no effect on most foods. Nevertheless, irradiation isn’t widely used in this country, largely because of opposition from some consumer groups that question its safety.

There is no scientific evidence of danger. In fact, when the decision is left up to scientists, they opt for irradiation; the food that goes into space with our astronauts is irradiated. Therefore, I applaud the Food and Drug Administration’s recent decision to approve irradiation of beef products. It remains to be seen now if public acceptance can be gained for this positive step.

We are realizing some of the benefits of nuclear technologies today, but only a fraction of their potential. Nuclear weapons, for all their horror, brought to an end 50 years of worldwide wars in which 60 million people died. Today, they provide our strongest guarantee of national security. Nuclear power is providing about 20 percent of our electricity needs. Many of our citizens enjoy longer and healthier lives through improved medical procedures that depend on nuclear processes. Yet we aren’t tapping the full potential of the nucleus for additional benefits. Many ill-conceived fears, policies, and decisions are seriously constraining our use of nuclear technologies. My intention is to lead a new dialogue to reevaluate national policies that affect the full range of nuclear technologies. Although some may continue to lament that the nuclear genie is out of its proverbial bottle, I’m ready to focus on harnessing that genie as effectively and fully as possible to deliver benefits to the greatest number of our citizens.

Welfare’s New Rules: A Pox on Children

Six decades of guaranteed government aid for economically deprived children ended on August 22, 1996, when President Clinton signed the Personal Responsibility and Work Opportunity Reconciliation Act. The law eliminated the open-ended federal entitlement program Aid to Families with Dependent Children (AFDC). In its place, it provided block grants to states under the new Temporary Assistance for Needy Families (TANF) program.

Two vocal camps have already boldly predicted the act’s consequences. Proponents say that new time limits on the receipt of cash assistance and sanctions for failure to comply with work and other requirements will propel welfare mothers into the labor force and produce abundant benefits. These include higher family income; more regular family routines; greater maternal self-esteem; more positive role models for children; and, in the longer run, declining out-of-wedlock teen births as children learn that welfare no longer provides a viable alternative to marriage.

Opponents are convinced that time limits and sanctions will alter the source but not increase the average level of family income. They predict increased stress on single mothers; lower quality of care of younger children; a reduction in the ability of mothers to monitor the activities of adolescents; and, for the many women likely to be unable to find and hold jobs, deepened family poverty to the point that even basic needs cannot be met, with attendant increases in homelessness, hunger, foster care, and health problems.

There is little scientific basis for either set of predictions. Rigorous random-assignment welfare-to-work experiments were conducted during the 1980s. Some featured sanctions for noncompliant behavior, but none imposed time limits, and few looked beyond work and welfare receipt to evaluate the impact of maternal employment on family process and child development. Also, it will be a few years before we have meaningful analysis from research projects funded by the legislation itself.

Recent declines in the number of welfare recipients, coupled with media accounts of hopeful mothers beginning to work after years on welfare, have created a kind of euphoria over welfare reform. Some think that few if any recipients will ever hit the new five-year time limit. More than a decade of research, however, suggests otherwise.

Families use welfare in disparate ways, and the majority of women who have received AFDC assistance are neither extremely long-term nor short-term recipients. The current declines in welfare caseloads undoubtedly reflect transitions off welfare by the more work-ready, short-term segment of recipients. However, substantial numbers of longer-term recipient families remain, and it is unrealistic to believe that most of them will escape having their benefits cut off by the time limits. Still others will face benefit reductions or will lose benefits altogether because of sanctions.

Reform will indeed spur a substantial number of welfare-to-work transitions. Just as certainly, it will increase the depth of the poverty among families in which mothers cannot make successful transitions to full-time work. In other words, the 1996 legislation will have a much bigger impact on the distribution than on the average level of the economic well-being of children. Even if the total number of children living in low-income families does not increase dramatically, the gap between the family incomes of the poorest and better-off low-income children almost certainly will.

Recent studies suggest that such deepening poverty, especially if it occurs early in childhood, can have detrimental effects on the cognitive development of children. This is our greatest concern about the changes in the welfare system. However, we believe that various policy changes, such as gearing assistance programs to the needs of the youngest children, could help minimize the negative effects of welfare reform on children.

Sweeping change

In addition to eliminating AFDC, the new welfare law made changes affecting child care, the Food Stamp program, Supplemental Security Income for children, benefits for legal immigrants, and the Child Support Enforcement program. It also offers states numerous options, such as capping benefits so that payments do not increase for recipients who have more children, and denying assistance to unmarried teen parents and their children.

Another major provision has implications for the well-being of children and youth. Parents must engage in work or work-related activities to receive TANF. Those not participating in their state’s work requirements face reduction or termination of benefits. The legislation introduced two provisions linked to the length of welfare receipt. First, after 24 months it requires recipients to participate in “allowable work activities” or face sanctions. Second, it sets a 60-month lifetime limit, regardless of work effort. This limit applies to the entire household and to all forms of federal assistance. States can impose shorter time limits on total receipt, and nearly half have already done so. For families currently receiving assistance, the five-year clock started when their state of residence implemented the federal block grant.

Early returns on the new state-designed welfare reforms appear to be stunningly positive. Caseloads have fallen dramatically in the past couple of years, by as much as 80 percent in some Wisconsin counties. Some pundits and politicians assure us that the five-year time limit will not, as some fear, deprive the least-able welfare recipients of their basic needs, because safeguards allow states to exempt up to one-fifth of their families from these limits for reasons of hardship. By and large, they say, the cutoff threat provides just the jolt many welfare recipients need to get their acts together.

Much of the euphoria over welfare reform is unfounded. Specifically, the number of welfare recipients started to fall well before the legislation took effect. In fact, a 25-year look at U.S. welfare rolls shows little change, except for a run-up in the early 1990s; the drop of the past three years followed that period of soaring rolls and actually represents a return to more typical levels. In Wisconsin, whose economy began booming earlier than those of most other states, the number of welfare recipients has indeed fallen dramatically. However, the typical state reductions are more modest.

Research by one of the authors of this article (Duncan) and colleagues Kathleen Harris and Johanne Boisjoly indicates that states should prepare for almost half of their welfare families to hit the five-year benefit cutoff within eight years. These estimates suggest that nearly two million families and four million children will accumulate five years of welfare receipt-more than twice as many as states can exempt for hardship. And although some affected families will make successful transitions into the labor force, many will not.

Photographs of former welfare mothers now hard at work make wonderful copy, but the truth is that there have always been hundreds of thousands of women making successful exits from the welfare rolls every year. Numerous studies have shown that typical periods of AFDC receipt are often quite short, lasting less than two years. For many, welfare provides short-term insurance against the economic consequences of a divorce, nonmarital birth, or job loss. The booming economy of recent years has made the transition back to work that much easier and faster.

Eclipsed by the optimistic predictions are the long-term welfare recipients, the families least likely to be able to support themselves and most likely to reach the welfare time limits. Profiles of recipients most likely to hit the five-year time limits are very similar to those drawn by earlier studies of long-term recipients: Two-thirds lack high school diplomas; a majority lack work experience; two-thirds were age 21 or younger when they started receiving benefits; and most have low levels of cognitive skills.

Iowa’s experience may illustrate what lies ahead. Beginning in 1993, welfare recipients there had to help formulate and then follow a “Family Investment Agreement.” Failure to comply led to a series of sanctions, including a six-month cutoff from all cash benefits. A follow-up study of sanctioned families found an almost equal split between those working immediately after the cash benefits ended and those not working. Nearly half of those sanctioned enjoyed monthly income increases averaging $500, but fully half suffered drops averaging nearly $400. As with welfare recipients in general, the heterogeneity of these Iowa families is key to understanding the consequences of sanctions and time limits. Roughly half of recipients may indeed respond quite successfully to sanctions, but the other half will not.

Within eight years, nearly two million families and four million children will hit the five-year benefit cutoff.

It is far too early to tell exactly how children’s family incomes will be affected by welfare reforms. Much will depend on how states use their new-found freedom to design replacement programs as well as on the economic fortunes of the nation and individual regions. From the point of view of children, the most important aspects of the new legislation are the various provisions that render families ineligible for receiving benefits, including time limits and sanctions for families that do not abide by the work requirements and other rules.

Consequences of child poverty

Although the literature on the effects of poverty on children is large, it has major shortcomings. Information on some topics is old or from studies narrowly focused on local communities, or it is of limited usefulness because children’s circumstances or outcomes are measured imprecisely. For example, although income and social class are far from synonymous, some studies used variables such as occupation, single-parenthood, or low maternal education to infer family income. Because family incomes are surprisingly volatile, there are only modest correlations between economic deprivation and typical measures of socioeconomic background.

Several longitudinal data sources do collect the requisite information, making it possible to distinguish between the effects on child development of income poverty and of its correlated events and conditions. The distinction is crucial, both conceptually and because welfare reform has a much bigger impact on family income than on correlates of poverty such as low levels of schooling or lone-parent family structure.

Research focused on isolating the impact of poverty as such suggests that family income can have large but rather selective effects on children’s development. Most noteworthy is the importance of the type of outcome being considered. To the extent that family income influences children’s development, it most affects children’s ability and achievement rather than their behavior, mental health, or physical health. Also important is the stage of childhood: family economic conditions in early childhood appear to be far more important in shaping ability and achievement than conditions later on.

A forthcoming study by Duncan and colleagues illustrates this. Controlling for income later in childhood as well as for demographic characteristics of households, it estimates that a $10,000 increment to income averaged over the first five years of life for children in low-income families is associated with a 0.81-year increment in completed schooling and a 2.9-fold increase in the odds of finishing high school. These were much larger than the effects of income measured later in childhood. Moreover, a similar picture of income effects emerges in a comparison of sibling differences in family income and completed schooling. This suggests that income differences, rather than unmeasured persistent family characteristics, cause the achievement differences.
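
To make the odds-ratio figure concrete, here is a minimal back-of-the-envelope calculation. The 2.9-fold figure comes from the study cited above; the 60 percent baseline graduation rate is a hypothetical value chosen purely for illustration.

    # Illustrative only: converts a 2.9-fold increase in the odds of finishing
    # high school into probability terms. The baseline rate is an assumption,
    # not a figure from the study.
    def apply_odds_ratio(baseline_prob, odds_ratio):
        odds = baseline_prob / (1.0 - baseline_prob)  # probability -> odds
        new_odds = odds * odds_ratio                  # scale the odds
        return new_odds / (1.0 + new_odds)            # odds -> probability

    baseline = 0.60  # assumed graduation rate without the income increment
    boosted = apply_odds_ratio(baseline, 2.9)
    print(f"{baseline:.0%} -> {boosted:.0%}")         # prints "60% -> 81%"

At that assumed baseline, a 2.9-fold increase in the odds corresponds to a jump of roughly 20 percentage points in the probability of finishing high school.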

Other studies have found associations between income and school-related outcomes prior to high school, including children’s achievement and cognitive and verbal ability test scores, after controlling for conditions such as maternal education, maternal age at the child’s birth, single parenthood, and employment. Transient poverty has measurable effects, but the most pronounced effects were for children who were persistently poor over multiple years. The lowest scores were for extremely poor children, those in families with incomes below 50 percent of the poverty threshold. Early cognitive, verbal, and achievement test scores strongly predict completed levels of schooling, and the few studies that examine this link show that poverty exerts much of its influence on high school completion through its effect on early test scores.

How poverty acts

Four key aspects or pathways illustrate how income affects children and where policy intervention may be effective: the quality of the home environment, school readiness, parental health, and neighborhood. In the home, the warmth of mother-child interactions, the physical condition of the home, and especially opportunities for learning account for much of the effect of family income on cognitive outcomes in young children. Differences in the home learning environments of higher- and lower-income children account for up to half of the effect of income on the cognitive development of preschool children and between one-quarter and one-third of the effect of income on the achievement scores of elementary school children. These findings are based on several large samples of children and their families, one of them nationally representative. The learning environment includes such factors as access to a library card, the practice of reading to children, the availability of learning-oriented toys and experiences, and the use of developmentally appropriate activities.

Recent research shows that economic deprivation early in children’s lives is most harmful to their chances for achievement.

School readiness depends also on the quality of the care children receive outside the home. High-quality developmentally appropriate early childhood education in the toddler and preschool years is associated with enhanced school readiness for poor and middle-income children alike. In addition, early childhood education programs for poor children increase verbal ability and reasoning skills through the early elementary years. Such programs may also decrease behavior problems and increase persistence and enthusiasm for learning.

For adolescents, family economic pressure can lead to conflict with parents, resulting in lower school grades, reduced emotional health, and impaired social relationships. Indeed, studies suggest this conflict may result from income loss or economic uncertainty due to unemployment, underemployment, and unstable work conditions, rather than from poverty or low income per se. In addition, parents who are poor are likely to be less emotionally and physically healthy. Parental mental health accounts for some of the effect of economic circumstances on child health and behavior and is associated with impaired parent-child interactions and fewer learning experiences in the home.

Finally, neighborhoods affect child and adolescent test scores and high school completion independent of family income. Poor parents are constrained in their choice of neighborhoods and schools. Low income may lead to residence in extremely poor neighborhoods characterized by social disorganization (crime, many unemployed adults, and neighbors who do not monitor the behavior of adolescents) and few resources for child development (playgrounds, child care, health care facilities, parks, and after-school programs). Families with preschoolers in poor neighborhoods tend to have fewer learning experiences, over and above the links seen between family income and learning experiences.

Judicious exemptions

Most worrisome among welfare reform provisions are time limits, sanctions for noncompliant behavior, and categorical restrictions on eligibility that drop cash assistance to zero. Some families hitting the limits or losing benefits when sanctioned for not following program rules will replace the lost welfare payments with income from work and other sources. Others, perhaps as many as half, will see their incomes fall well below the poverty line. State-specific provisions that deny cash assistance to children born to underage, unmarried women also dramatically lower the incomes of a subset of affected families.

When viewed with the welfare of young children in mind, time limits pose less of a threat than sanctions and categorical restrictions, especially in states that opt for the full five-year time limits. In those states, mothers are not likely to have young children in their households after the 60 months, unless they have additional children during that period. In contrast, sanctions and many of the categorical provisions are much more likely to deny benefits to families with very young children. Not only do young children appear to be most vulnerable to the consequences of deep poverty, but mothers with very young children are least able to support themselves.

States should exempt families with children up to ages two or three from welfare’s new rules.

An obvious recommendation is that states exempt families with young children from time limits, sanctions, and categorical restrictions. Some states now exempt families for up to a year after a child’s birth. Granting exemptions until a child’s second or third birthday would be far preferable. States could also establish more universal programs, such as a child allowance or refundable tax credits geared to children’s ages. Those who fear that such policies create incentives for mothers to bear additional children should be aware that evidence suggests at most weak links between fertility and the generosity of welfare benefits.

Several European countries gear time-limited benefits to the age of children in their assistance programs. In Germany, a modest parental allowance is available to a mother working fewer than 20 hours per week until her child is 18 months old. France guarantees a modest minimum income to most of its citizens, including families with children of all ages. Supplementing this basic support is the Allocation de Parent Isolé (API) program targeted at lone parents. Eligibility for generous income-tested API payments to families with children is limited to the period between the child’s birth and third birthday, even if low-income status persists beyond that point. In effect, API acknowledges a special need for income support during this period, especially if a parent wishes to care for very young children and forgo income from work. The elaborate state-funded system in France for providing child care beginning at age 3 lessens the problems associated with the parent’s transition into the labor force.

Yet another strategy is to liberate long-term recipients from welfare through a combination of cost-effective job-training and other skill-building programs. Also important are efforts to make work pay by increasing the after-tax family incomes of women who take low-wage jobs and by funding work-for-welfare jobs of last resort for those who are unable, despite effort, to find an employer.

If the goal is to promote the healthy development of children, we must go beyond cash transfers to service-delivery programs such as nutrition education and nutritional supplements, medical care, early childhood education, and housing. Existing programs in these areas could be expanded. The case for giving preference to such programs over income transfers is strongest for those addressing health and behavior, because there is little evidence that outcomes in these domains are responsive to improvements in family living standards.

Because of the relationship of home environment to cognitive ability, interventions might profitably focus on working with parents. An example is the Learningames curriculum, developed by Joseph Sparling and Isabelle Lewis at the Frank Porter Graham Center at the University of North Carolina at Chapel Hill. In the curriculum, parents receive instruction, materials, and role-playing practice in providing learning experiences. Programs that focus on teaching parenting skills and on encouraging and modeling reading skills alter parenting behavior as well as child language and school readiness. To be effective, home visits focusing on parenting skills have to be frequent (at least several times per month) and extensive (several years in duration) and have specific curricula focused on behaviors and interactions. More generally, economic logic requires a cost/benefit comparison of the various development, income transfer, and service delivery programs.

It will be years before we have a definitive accounting of the long-run effects of the 1996 welfare reforms. These reforms will almost certainly increase both the number of successful transitions from welfare to work and the number of severely economically disadvantaged children. Recent research shows that economic deprivation early in children’s lives is most harmful to their chances for achievement. Policies aimed at preventing either economic deprivation itself or its effects are likely to constitute profitable social investments.

Is Anybody Buying Policy?

Technology Review, the venerable magazine published by the MIT Alumni Association, has decided that policy doesn’t sell. At least since 1967, when John Mattil became editor, Technology Review has devoted itself to exploring the social and political implications of science and technology as well as reporting on the steady march of progress in human discovery and invention. Mattil and his successors, particularly Steven Marcus, who was managing editor under Mattil and became editor after a stint as editor of Issues, succeeded brilliantly in creating a magazine that enlightened us about developments in science and technology at the same time that it stimulated us to think critically about their use. But no more.

Marcus and most of the staff have been dismissed. John Bendit, the new editor, told Kim McDonald of the Chronicle of Higher Education that in the future policy articles written by experts will be replaced by reports on technological innovation written by journalists. Bendit and publisher R. Bruce Journey hope to double the magazine’s circulation in the next few years and reduce the subsidy provided by the alumni association. Certainly there’s nothing inherently wrong with publishing a magazine that enables readers to keep up with progress in technology. The audio, computer, and car magazines have been very successful at tracking consumer technology. Scientific American does a distinguished job of explaining technology as well as science to a broad audience. But that should not be every magazine’s mission.

Technology Review has learned the same lesson that we have learned at Issues: Serious and informed discussion of S&T policy is not a mass-market commodity. We’ll print about 15,000 copies of this issue. That’s two orders of magnitude less than Soap Opera Digest, Golf Digest, or Martha Stewart Living. Family Handyman and Sesame Street Magazine print more than a million copies of each issue; Hot Rod and Weight Watchers are close to that. Even Beckett Baseball Card Monthly, which claims about 350,000 subscribers, is in a different universe. Of course, neither magazine was created to be a profit center, although we have all nourished the hope that a broad public would develop an interest in S&T policy.

From a business perspective, the Technology Review decision is good for Issues. Although there have always been obvious differences between the magazines, Technology Review was in some ways the magazine most like Issues. It was the only magazine bigger than Issues that devoted significant attention to S&T policy issues for a broad audience. Now we won’t have to compete for authors or readers who care about policy. But that is small comfort. The National Academy of Sciences and the University of Texas at Dallas are not focused on market share; they seek to expand the market for ideas. They support Issues because they believe that the nation needs a place to debate S&T policy issues. They want to help create a public that is better informed and more involved in the development and use of scientific and engineering knowledge. Technology Review was engaged in the same quest, and it seemed fitting that the nation’s premier engineering school would be supporting that effort.

We would like to see more places to discuss S&T policy so that more viewpoints would be represented and more readers would be drawn into the discussion. It was discouraging when Congress expressed its lack of interest in substantive discussion of S&T policy by eliminating its Office of Technology Assessment. What are we to think when MIT loses interest?

An Economic Strategy to Control Arms Proliferation

For 45 of the past 50 years, defense budgets were largely decoupled from economics. Vast expenditures on defense during the Cold War were debated and decided in a compartmentalized fashion, separated intellectually and institutionally from debates over economic policy. The life-and-death pressures of a nuclear arms race with the former Soviet Union clearly trumped periodic concerns about the price tag attached to various pieces of our defense establishment, as well as occasional conflicts with other international economic or foreign policy interests. When it came to buying national security for the United States, money was no object.

Since the end of the Cold War, however, a lot more attention has focused on price tags. Procurement outlays on major defense systems in the United States have fallen by about 40 percent from their peak in the 1980s, and spending by our allies has declined by even greater amounts. Other military establishments around the world have ratcheted down their force structures and spending by equally significant amounts.

As a result of this heightened attention to the bottom line, the impact of arms exports on the costs of maintaining an economically viable defense industry has, over the past five years, begun to play a growing role in decisions by the United States and its allies to export high-tech weapons systems. Linkages between the economics of maintaining national defense establishments and political-military security issues are clearly visible as never before. Nowhere are the connections between the two more obvious and challenging than in East Asia, where the principle (though often not the practice) underlying U.S. policy is to keep economic and security relationships on nominally separate tracks.

Emerging as the sole global military superpower (1997 U.S. spending on R&D and procurement of weapons systems was roughly equal to that of Europe, Japan, Russia, China, Iraq, Iran, and North Korea combined), the United States finds itself in a curious and contradictory position today. We continue to subsidize our allies’ defense industries through a network of relationships developed during the Cold War, but we compete against these same allies for sales in third-country markets. Our allies, pressed by the huge fixed costs of maintaining their defense industries, increasingly turn to questionable customers abroad in an effort to compete successfully against the giant U.S. companies that today account for half of global sales. These same U.S. companies press an increasingly cost-conscious Pentagon to support them in competing for exports to some of these marginal customers, arguing that equally capable European systems will be sold if U.S. systems are not allowed to enter the competition and that the United States should aggressively try to capture the benefits of larger production runs and scale economies through these exports.

The upshot is that the United States today is in an indirect arms race with itself-or more directly, with foreign allies with whom it cooperates technologically and competes economically. In the medium to long run, there are some real dangers to confront if we stumble unthinkingly down this road. The world could become a more dangerous place as more advanced military technology developed on the U.S. taxpayer’s nickel leaks into global markets more quickly. And the U.S. may find itself in the position of having to significantly increase its future military spending in order to deal with high-tech U.S. weapons technology that has been distributed too widely and too quickly. But neither of these outcomes should be viewed as inevitable.

The economic fundamentals of our quandary reside in the cost structure of many key, high-tech defense industries, where system costs are dominated by various economies of scale-in assembling and sustaining essential design capabilities, in systems R&D, in start-up costs, in production capacity, and in learning curves. The price of entry into development and production of the most advanced weapons systems is a large fixed investment, with unit costs declining sharply as the scale of production increases.
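
A stylized calculation makes this cost structure concrete. The sketch below assumes a hypothetical $10 billion development bill, a $100 million first-unit cost, and an 85 percent learning curve; none of these figures describes an actual program.

    import math

    # Stylized unit-cost model for an advanced weapons system. All numbers are
    # hypothetical and serve only to illustrate how a large fixed investment
    # plus a learning curve drives unit costs down as production volume grows.
    def average_unit_cost(n_units, rd_cost=10e9, first_unit_cost=100e6, learning=0.85):
        b = math.log(learning, 2)  # each doubling of output cuts recurring cost to 85 percent
        recurring = sum(first_unit_cost * (i ** b) for i in range(1, n_units + 1))
        return (rd_cost + recurring) / n_units

    for n in (50, 200, 800):
        print(f"{n:4d} units: roughly ${average_unit_cost(n) / 1e6:.0f} million each")

Under these assumed figures, average cost falls from roughly $250 million per unit at 50 units to about $40 million at 800, which is the arithmetic behind the pressure to spread fixed costs over export sales.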

A fundamental element of the national security policy of many nations (including most U.S. allies) is the creation and maintenance of their own ability to produce at least some advanced weapons systems. During the 40 or so years of the Cold War, defense spending in most countries with pretensions to producing advanced weapons was large enough to enable production of these systems in volumes sufficient to at least approach affordability. With the widespread decline in national defense budgets, however, the only way in which many nations will be able to maintain a viable industry is by exporting a much larger portion of their output to overseas customers. This is true in Western Europe, where, despite trans-European defense industrial consolidation and halting steps toward a single European defense market, tremendous economic pressures to export leading-edge systems outside the NATO alliance remain in force. It is equally true in Japan, where, with the active support of the Ministry of International Trade and Industry, the defense industry is currently mounting a campaign to relax current policies prohibiting defense exports. It is even true in the United States, where, since 1995, formal conventional arms transfer policy has for the first time explicitly recognized the economic impact on the domestic industrial base as a factor to be weighed in decisions on arms exports.

The big picture

Even in the medium run, lessened inhibitions on the export of advanced weapons-and increased competition for these sales among the United States and its allies-may have significant effects on the political and military balance in many regions, and East Asia may very well prove to be the test case measuring how well we can manage these new realities. In the long run, because retention of a significant technological advantage over adversaries is critical to U.S. military strategy, proliferation of advanced capabilities through arms exports by our allies may ultimately be the threat that forces us to once again increase our own defense spending and accelerate development of new generations of systems, at a time when budget realities give us little margin for doing so without sacrificing other national priorities.

One excellent example of this phenomenon is the use of the so-called “grey threat” (as a recent RAND study described it) to justify rapid development of the F-22 fighter. European fighters such as the Eurofighter, Rafale, and Gripen, which begin to approach the quality of current U.S. front-line fighters, are about to enter production, and the Europeans will need to export them to reduce their unit costs. Proponents of the F-22 argue that, as a result, our forces will likely need to deploy even more advanced fighters in the not-too-distant future in order to guarantee the assumed substantial margin of superiority over aircraft in the hands of conceivable adversaries. Indeed, once it seems likely that our allies will be willing to sell a relatively potent system to a foreign buyer, there is a considerable argument for supporting the sale of our own equivalent system on the grounds that we might as well reap the political and economic benefits and the advantages of a closer military relationship for ourselves. In effect, given sufficient competition from our allies, there is a perverse but compelling logic to us becoming our own “grey threat.”

Thus, there is a complex self-reinforcing dynamic at work. With declining defense spending, exports have become critical to the very survival of most defense industries outside the United States. Retention (or creation, in some cases) of economically viable, indigenous defense systems capabilities is viewed as fundamental to national security in many nations, which leads to aggressive economic competition for defense export opportunities. The increasing economic pressure to export ever more advanced capabilities, in turn, may alter delicate strategic balances in sensitive regions. Changes in the strategic balance may trigger even greater or wider interest in acquiring advanced systems and ultimately create more pressure to accelerate the pace of development of new systems by the most advanced military powers. One can dimly imagine two possible new equilibria: a regime with much higher levels of defense spending, where the economic pressure to export the most advanced capabilities has subsided to more manageable levels, or the construction of a more cooperative regime for arms sales, where the handful of military powers with any realistic potential to develop the most advanced military systems agree to some degree of mutual restraint on exports to third parties, perhaps in exchange for some program of industrial and technological cooperation that ensures the survival of core defense industrial capabilities deemed essential to national security. The latter idea has gone by various names-a suppliers’ cartel, an inner circle, and so on-and is probably best viewed as an experiment to be pursued rather than a crystal-clear vision of a particular endpoint. The East Asian defense market could provide an excellent opportunity for testing this approach.

The United States today finds itself in an indirect arms race with itself-or more directly, with foreign allies with whom it cooperates technologically and competes economically.

The competitors

Three suppliers provide most of the advanced weapons systems sold to East Asia: the United States, Europe (which, in practical industrial terms, is beginning to look like a single European conglomerate in many, though not all, defense sectors), and Russia. Japan has highly advanced capabilities in defense systems but until now has enforced a self-imposed ban on exports. China does not produce the most sophisticated systems but is an important exporter of mid- and low-end equipment.

U.S. industry is the 600-pound gorilla, accounting for about half of worldwide arms transfer deliveries. At least one reason for this is very simple: The United States spends far more on developing new technology and systems. U.S. defense R&D spending accounted for 70 percent of the total 1994 defense R&D spending by the United States, its NATO allies, and Japan.

The United States also consumes almost half of the defense goods acquired by this group. Japan is second with 16 percent of the total. Indeed, given the dominant U.S. investment in R&D, the real question is why the United States has only a 50 percent share of global defense trade. Is the United States that inefficient? Are the performance advantages of U.S. systems that much more costly at the margin?

Casual observation suggests that the United States is not grossly less efficient than its allies, and although squeezing out marginal performance advantages on the bleeding edge of the technological frontier may be disproportionately costly, this too seems unlikely to explain the bulk of the gap. Rather, it seems likely that the United States, through a variety of policy choices, has in effect subsidized the development of high-tech weapons systems by its closest allies. The mechanisms have included a deliberate policy of liberal and inexpensive technology transfer to allies through coproduction, licensed production, and codevelopment programs, as well as a variety of policies-the waiver of recoupment of R&D charges on export sales of components and systems, intellectual property policies, and so on-that make it possible for foreign competitors to acquire some of the key components of high-tech weapons systems at prices that may approach their marginal cost of production.

Buying a U.S.-built radar design at only a modest premium over production cost and inserting it in a European fighter, for example, allows the European systems integrator to market a state-of-the-art platform without investing in an enormously costly development effort on the radar subsystem. Having a U.S. defense contractor work as a joint venture partner on your air-to-air missile may provide you with access to technologies developed at great U.S. taxpayer expense.

This is not to say that doing so is irrational from the U.S. perspective. Strengthening its allies militarily (including their industrial capabilities) is a security interest of the United States, one that was given priority over possible implications for longer-term economic competition during the Cold War. Often, the allies built protective walls around their defense markets, and giving them access to U.S. technology was part of the price for slipping over those walls. The decision was an economic one: selling them something, with some return, was better than selling them nothing and earning no return on technology that in any event had already been paid for. The decision also reflected a political judgment: industrial cooperation strengthened these alliances. And the decision was a military one: given that these countries would be fighting side by side with the United States, why not give them the same equipment in order to build greater operational military coherence?

The structure of incentives within the U.S. acquisition system was another factor promoting bargain-basement technology transfer to allies. U.S. defense contractors were, after all, contractors. The costs of technology development were funded primarily by the taxpayer. Unlike firms in commercial high tech, a defense company did not have to price its output to recover a reasonable return on technology investments in order to remain viable. Furthermore, because there often were competing U.S. contractors able to offer comparable solutions, foreign governments, with considerable monopsony power, were able to play them off against one another to negotiate the most favorable possible terms in acquiring U.S. technology. And because the government was forbidden to favor one contractor over another in competing for foreign sales, U.S. policy did nothing to improve the bargaining position of U.S. firms.

U.S. contractors, of course, always had their own economic self-interest to guide their decisionmaking. If a company decided, for example, to transfer technology representing a taxpayer investment of $4 billion to Japan for $800 million in licensing fees, it presumably was making the judgment that, in the long run, its potential return on sales lost to future Japanese competition making use of those technologies was worth less than $800 million. But if government investments in similar technologies were also earning returns for other U.S. companies, it is easy to see how the company’s calculation of the floor on what it would be willing to accept for use of the technology might logically diverge from a national calculation.
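
A rough sketch of that divergence is below. The $800 million licensing fee comes from the hypothetical example above; the projected sales losses are additional assumptions invented purely to show how the private and national bottom lines can point in opposite directions.

    # Hypothetical comparison of a contractor's private calculation with a
    # national one for the technology-licensing deal described in the text.
    # The $4 billion taxpayer development cost is sunk, so it does not enter
    # either calculation directly.
    licensing_fee = 0.8e9        # fee the contractor would collect
    own_profits_lost = 0.6e9     # firm's estimate of its own future lost profits (assumed)
    other_us_firms_lost = 0.9e9  # losses borne by other U.S. companies (assumed;
                                 # ignored in the firm's private calculation)

    firm_net = licensing_fee - own_profits_lost
    national_net = licensing_fee - (own_profits_lost + other_us_firms_lost)

    print(f"Firm's view:   {firm_net / 1e9:+.1f} billion dollars (deal looks worthwhile)")
    print(f"National view: {national_net / 1e9:+.1f} billion dollars (deal looks like a loss)")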

In short, the structure of the U.S. acquisition system naturally lends itself to speculation that the United States is shouldering much of the burden of development cost for systems procured and built by its allies. That is, U.S. policy, in addition to underwriting the cost of sustaining the most formidable and effective defense industry in the world-its own-also in effect underwrote its own industry’s principal competitors. U.S. policies supporting defense exports are at least a part of this story.

The market

In part because of its emergence as a potential focus for regional military competition and in part because rapid economic growth has enabled rapidly growing military expenditures, East Asia’s importance in global defense markets is increasing quickly. As recently as the 1992-94 period, for example, Arms Control and Disarmament Agency (ACDA) statistics show East Asia accounting for about 15 percent of global arms transfer deliveries. Intelligence community estimates of future requirements suggest that over the remainder of this decade, the East Asian share of deliveries will double to 30 percent, roughly matching the regional markets of the Middle East (30 percent) and Europe (27 percent). Current data seem to show this forecast on track, with East Asia accounting for 20 percent of world defense expenditure in 1996 and almost half of global sales of large conventional arms. Although the recent economic turmoil in East Asia may dim the prospects for such sales over the next several years, the structural forces promoting a regional arms race (the growing role of China as a military power, continuing tensions on the Korean peninsula, unresolved territorial claims and boundary disputes, and a historical legacy of rivalries and conflicts) are likely to fuel a continuing regional military competition into at least the first decades of the next century.

The East Asian arms market consists largely of four relatively large national markets and three considerably smaller ones. The four big markets are South Korea (24 percent of deliveries during 1992-94), Taiwan (20 percent), Japan (15 percent), and China (14 percent). The smaller players are Thailand (7 percent), Singapore (5 percent), and Malaysia (4 percent).

Although the U.S. share of the East Asian regional market (54 percent) is only slightly higher than its share of the world market (50 percent), there is considerably greater variation within individual national markets. The United States was the overwhelmingly predominant supplier in Taiwan (100 percent of deliveries over 1992-94) and Japan (97 percent), but accounted for less than half of sales in Korea (with the balance going mainly to European suppliers).

Taiwan, Japan, and Korea all have intimate security relationships with the United States, but Korea has been more disposed to trade performance for increased technology transfer, favoring suppliers able and willing to transfer a greater measure of technology over those offering the best-performing products. Other systems that the Koreans have purchased, such as German diesel submarines, simply have no competitive U.S. suppliers. U.S. laws prohibiting foreign corrupt practices may also have cost U.S. firms sales in Korea, if recent public trials of government officials are indicative of the procurement culture.

China buys 97 percent of its advanced arms from Russia, whose willingness to sell advanced weaponry to a neighbor with which it has had occasionally antagonistic confrontations is clearly related to the dire economic straits that its armaments industry now faces. But in international meetings, knowledgeable Russians have also suggested that hardliners within the Russian military increasingly see the China relationship in strategic terms as an offset to a reinforced U.S.-Japan security partnership.

All four major East Asian markets are actively seeking to use their defense systems markets as a tool for building up their aerospace industries. The line between commercial and defense applications of these technologies is often blurry. It is no secret that all of these countries aspire to become world-class builders of air and space systems, and civil and military programs are frequently intermingled. For economic reasons, then, as well as because of political and military rivalries, the nations of East Asia are likely to continue to be major customers for weapons systems and defense technology. And for economic reasons, too, those who have provided them with advanced capabilities will probably continue to sell them what they seek.

With declining defense spending, exports have become critical to the very survival of most defense industries outside the United States.

Status quo

U.S. policy supports defense exports through three principal avenues:

Granting of export licenses. Weapon systems and major system components are all subject to export control. In principle, licenses are granted only when it is in the security interest of the United States, but an explicit recognition of arms exports’ role in strengthening the U.S. industrial base was added by the 1995 Clinton administration conventional arms transfer policy. There are no broad criteria or principles that guide decision-making on license applications; the methodology is explicitly case-by-case, with no guarantee of logical consistency within or across regions. There has been some discussion but no broad implementation of general guidelines that would specify under what circumstances and to what nations differing levels of advanced technology could be released, as a tool to improve the consistency and coherence of the licensing process.

What actually happens is that a nation requests a license to learn about or actually buy something (which not infrequently follows informal contacts with a U.S. contractor wishing to sell it), and that is followed by an interagency review process in which Defense, State, Commerce, ACDA, Energy, the intelligence community, and possibly the National Security Council can play significant roles. The agencies not infrequently have different views (economic and trade interests vs. security considerations vs. proliferation concerns vs. diplomatic issues) and as the Presidential Advisory Board on Arms Proliferation Policy observed in its July 1996 report, “Bureaucratic warfare rather than analysis tends to be the modus operandi in what is often a protracted process of plea bargaining and political compromise that may not reflect long-term national objectives.” Needless to say, the significant potential for uncertainty and delay built into this process-albeit now much improved from a business perspective-can remain an obstacle to exports.

Congressional prohibitions have placed further restrictions on policymakers in specific cases of regional arms transfers. On the other hand, one can argue that with the increased recognition of economic benefit as a legitimate arms export policy objective, the system has been gradually tipping toward a presumption that, except in the case of particularly disreputable would-be customers, if some country is able and willing to sell a particular capability to a buyer, then it might as well be the United States.

Diplomatic and administrative support. As the U.S. diplomatic infrastructure abroad became aware that encouraging exports was a priority of current government policy, it has over the past few years become more involved in even-handed support of U.S. contractors competing for military exports. The support has taken the form of sharing unclassified insights into often-opaque budget planning within foreign governments, U.S. embassy officials lobbying local government officials, U.S. military personnel lobbying foreign militaries, and senior political appointees lobbying their foreign counterparts. In my experience, this has been perhaps the most important and effective element of U.S. policy support for military systems exports.

On the other hand, I have also observed questionable excesses. One good example occurred in East Asia, where an ambitious young ambassador, with minimal interaction with local U.S. military staff but presumably greater contact with the U.S. contractor eager to make the sale, was pressing local defense officials hard to buy an advanced military helicopter. Behind the scenes, senior military staff from the U.S. Pacific Command were scratching their heads in befuddlement, observing that the local military was still struggling to master tons of recently acquired equipment. Furthermore, what was the country’s neighbor-also a U.S. ally-going to think of this proposal? The operative policy seemed to be that if some enterprising salesman, official or unofficial, convinced the locals that they wanted something, then it might as well be the United States that does the selling.

Financial subsidies to exports. U.S. defense contractors have lobbied successfully for some new financial supports for defense exports by arguing for a “level playing field.” The leveling argument has both a domestic component (armaments should receive the same kind of treatment that other goods receive) and a foreign component (foreign governments give their firms financial support in exporting, and therefore we should too).

This logic is attractive at first glance, but it has two problems. The first is the implicit assumption that defense exports are, putting aside the special nature of their customers and application, like other traded goods. The second is the assumption that broadly-focused export subsidies are likely to be a cost-effective tool for increasing export sales.

Generally, weapons systems are not like other traded goods: a national security exemption excludes them from the subsidy and antidumping disciplines of the General Agreement on Tariffs and Trade (GATT). Thus, although it is true that producers of industrial goods making use of R&D funded by other government agencies are not forced to pay an R&D recoupment charge (a charge to foreign customers covering a portion of the government’s investment in R&D), the extent to which exports of such goods can be subsidized by government is severely limited by the ability of foreign competitors to seek countervailing duties and antidumping orders. No such restraints apply to weapons systems, which are presumed to be covered by the national security exemptions in the GATT. In fact, one can reasonably argue that “dumping” (pricing exports below the full average cost of development and production) is normal practice in international competition in defense systems.

U.S. defense contractors have sought waivers of R&D recoupment charges, a policy with two particularly important implications for calculating the economic benefit to the Department of Defense (DOD) from defense exports. First, it means that the benefits will be felt mainly through cost declines derived from improved economies of scale and progress along the learning curve (and possibly through avoidance of shutdown and start-up costs when exports keep production lines “warm”) rather than through the spreading of R&D costs over a larger output. Second, as already mentioned, it means that foreign users of defense components have potential access to U.S. technology at marginal cost, enabling them to be competitive in systems where they might otherwise be unable to compete against U.S. producers.

R&D recoupment charges have been waived since 1992 for commercial sales. For foreign military sales made on a government-to-government basis, DOD has long had the discretion to waive R&D recoupment charges on sales to NATO, Japan, Australia, and New Zealand and has routinely done so. In 1996, Congress granted authorization to do so in other cases.

Doubts about the efficiency of general subsidies as a tool to promote defense exports are raised by an analysis of actual markets for defense systems. DOD’s 1994 forecast of arms exports divided arms deliveries into two categories: goods already under contract for future delivery and goods not yet under contract. The global split for worldwide arms trade for 1994-2000 was about 50/50 in these two categories.

Within the “not yet under contract” category, foreign purchases were divided into three categories: where the United States was the only source for the system the customer was likely to specify; where the United States was not in competition (it did not produce an equivalent product or did not sell to a particular customer as a matter of foreign policy); and where the United States was in competition with other foreign arms producers. Of deliveries made and anticipated during 1994-2000, only 11 percent are in the third category, and 48 percent are in the first. Thus, U.S. arms exports would be at least 48 percent of world sales over this period, and at most 59 percent. With such a small part of the market in play, an efficient export subsidy policy should be selective, picking customers and sectors where real competition is in evidence and where it is likely to have a significant impact on DOD’s industrial base.
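
The bounding arithmetic behind that range is simple enough to lay out explicitly; the sketch below uses only the DOD forecast percentages quoted above.

    # Bounds on the U.S. share of 1994-2000 arms deliveries, using the DOD
    # forecast categories cited in the text.
    sole_source_us = 0.48  # deliveries for which the United States is the only likely source
    contested = 0.11       # deliveries for which the United States faces real competition

    floor = sole_source_us                # the U.S. wins none of the contested sales
    ceiling = sole_source_us + contested  # the U.S. wins all of them

    print(f"U.S. share of world sales: between {floor:.0%} and {ceiling:.0%}")
    # prints "between 48% and 59%"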

The same analysis also suggested that arms exports to East Asia were not likely to be a particularly fertile orientation for an export-promotion policy. Of the “competitive” market opportunities available to U.S. firms, only 7 percent of possible sales over the 1994-2000 period were found in East Asia.

An “inner circle” approach

With defense downsizing in full swing around the globe, all major producers of high-tech armaments other than the United States face a virtual economic crisis in their defense industries. Unless they are willing to give up maintenance of a national capability to produce advanced military systems as a national security objective (exceedingly unlikely), they will be pushed to close off their national markets to foreign-built systems and dramatically increase exports. In the long run, this is likely to raise significant problems for the United States. In East Asia, unrestrained proliferation of advanced conventional military capabilities is likely to further aggravate what already appears to be one of the most difficult future areas for U.S. foreign policy.

It is simply unrealistic to suppose that the European countries will give up their ambitions to maintain their own defense industrial base.

The obvious alternative is to work through some sort of system of industrial and technological cooperation with major U.S. allies (Europe and Japan) that maintains access by U.S. defense producers to these important markets, permits our allies to maintain a core defense systems capability, and restrains the unchecked proliferation of advanced systems exports. As the only nation that can maintain an economically affordable advanced defense sector without relying on exports, the United States must play a leadership role in constructing such a system. The massive U.S. investment in military technology, which in effect underwrote the development of allied industrial capabilities in the first place, continues to provide enormous leverage for this purpose.

One approach would be to encourage the formation of what might be called an “inner circle” of arms producers. To put it most bluntly, the idea would be to focus on controlling diffusion of the most advanced capabilities, where there are really only a handful of countries capable of producing and marketing sophisticated weapons systems. An inner circle of close U.S. allies would be given access to the U.S. defense market, and the United States access to their markets, as part of an agreement to work together on joint development and production of advanced systems for use within the limits of this narrowly defined “common market.” In exchange for being given access to the U.S. market and selected U.S. defense technologies, participating countries would accept negotiated restraints on exports of systems and technologies developed within the “inner circle” to those outside.

In this way, two potent economic incentives (access to U.S. technology and markets) would be combined so as to support two major U.S. foreign policy objectives: restraint on exports of the most advanced weapons systems and closer military cooperation and cohesion with U.S. allies. It is simply unrealistic to suppose that the European countries will give up their ambitions to maintain their own defense industrial base. Without something like the inner circle to guarantee the economic viability of European defense capabilities, their only alternative will be to export in a fairly indiscriminate fashion to relatively dubious customers.

An additional advantage of the inner-circle approach is that it can be defined and refined incrementally. In the beginning, its domain could be quite narrow, negotiated on a case-by-case basis. For example, the United States could initially experiment with this approach in very specific systems-ballistic missile defense systems, stealth cruise missiles, or stealth radar-with a handful of very close allies. Given a track record of initial success, it could then be expanded to cover additional types of systems and eventually, perhaps, become an integrated framework covering production and export of a whole range of advanced weapon systems.

Incremental expansion could also cover new categories of membership, so that instead of having a single inner circle, the system could be more like concentric circles. Close allies would have the greatest degree of access, and the greatest degree of restraint would be imposed. The outermost circle would include virtually everyone but also be associated with the least forceful restraints-an expansion and elaboration, perhaps, of current global agreements covering international export of sensitive military technologies and weapons of mass destruction. Intermediate levels of inclusion and restraint between these two limits could be negotiated where it made sense: bringing Russia into the fold, for example, or covering sensitive but somewhat more widely diffused advanced military technologies mastered by a larger number of players. In short, the inner circle idea could be viewed as an experiment rather than an endpoint: a graduated and progressive construction of an international regime blending restraint and cooperation in military weapons systems production and sales, using pragmatic and selective principles for inclusion of participants and technologies.

Some might argue that this is an impractical and utopian approach that would never survive the rough and tumble of the real world. In fact, however, supposedly “impractical” restraints on the export of components and systems for missiles and weapons of mass destruction, though far from perfect, currently serve us well in reducing the dangers of proliferation. And there are real examples where an incremental inner-circle approach has proved practical. When the United States, Germany, France, and Italy agreed in 1995 to pool funding and technologies in a cooperative development program for a common theater missile defense system-the Medium Extended Air Defense System (MEADS)-all four partners agreed that no export sales could take place without common assent. Though France later dropped out of the program when it finally felt the pressure of defense budget cuts in 1996, MEADS showed that export restraints linked to a sharing of funds, technology, and production in a common acquisition program can be negotiated successfully, even with strong-willed and independent partners. Out of such incremental first steps, an inner circle of armaments cooperation and export restraint can gradually be built and later expanded.

One thing is certain. Weapons exports are part of a global economic reality. Inducing others (Europe, perhaps Japan, and ultimately Russia) not to engage in irresponsible proliferation of advanced capabilities in Asia requires global cooperation in constructing a regime that reconciles legitimate national security interests-maintaining defense establishments and curbing uncontrolled proliferation-with the economic realities of a high-tech defense industry that is fundamentally global in outlook.

Forum – Winter 1998

Cautious arms control

William F. Burns’ “The Unfinished Work of Arms Control” (Issues, Fall 1997) summarizes the key points of one side of the debate over the role of nuclear weapons in the post-Cold War world. I believe that some of the measures Burns and others have advocated will have the unintended effect of increasing the likelihood that weapons of mass destruction will be used against the United States or our troops deployed abroad. By adopting a permanent ban on underground nuclear testing and forswearing the possibility of first use of nuclear weapons, the United States will greatly undermine the value of its nuclear deterrent.

Our experience in the Gulf War shows how the current policy, which allows for the possibility that the United States would use nuclear weapons to respond to a nonnuclear attack, saved lives by deterring such an attack. Before the war, President Bush, Secretary of Defense Cheney, and other senior officials warned Iraq that if it used chemical or biological weapons, the U.S. response would be “absolutely overwhelming” and “devastating.” Iraqi officials later confirmed that these statements deterred Iraq from using chemical and biological weapons, because Baghdad had interpreted U.S. threats of devastating retaliation as meaning nuclear retaliation.

It stands to reason that for this first-use threat to hold water, the United States needs to maintain a credible nuclear capability. The Comprehensive Test Ban Treaty, which has been submitted to the Senate for ratification, prohibits underground nuclear testing and will substantially undermine the safety and reliability of our nuclear arsenal. Over time, the nuclear materials and high-explosive triggers in our weapons deteriorate, and we do not have experience in predicting the effects of such degradation. The fact that U.S. nuclear weapons are the most sophisticated in the world, coupled with the need to maintain the highest safety standards, means that our arsenal requires frequent testing to ensure that our weapons operate reliably.

To maintain the U.S. nuclear stockpile, the Clinton administration has developed a program that relies on computer simulations in lieu of actual nuclear tests. This program faces enormous technical challenges, as confirmed by the director of Sandia National Laboratories, who testified last year that “Another hundred-to-thousand-fold increase in capability from hardware and software combined will be required” to adequately maintain our nuclear arsenal.

The United States should be careful to resist the tendency in peacetime to adopt feel-good measures such as a ban on the first use of nuclear weapons and the Comprehensive Test Ban Treaty. Both would jeopardize our security as naively as did the Kellogg-Briand Pact outlawing war. As long as the United States retains a sound, credible arsenal of nuclear weapons, the Saddam Husseins of the world will have to think twice before unleashing weapons of mass destruction against the United States or our allies. This is the value of nuclear deterrence.

SENATOR JON KYL

Republican of Arizona

Senate Select Committee on Intelligence


School politics are local

I read with great interest Richard F. Elmore’s “The Politics of Education Reform” (Issues, Fall 1997). I have served for over 20 years in the U.S. House of Representatives and have been involved in the development of legislation ranging from the Improving America’s Schools Act to the Carl D. Perkins Vocational Education Act to the Higher Education Act. So although Elmore has observed the politics of education reform, I have experienced firsthand the endless arguments over what is best for our nation’s children.

I have also spent over half of my life in public education. I have been a teacher, counselor, principal, superintendent, school board president, and most important, a parent. From that experience, I can state that education has been, and always will be, a local issue. The notion of local control of schools is not “largely inaccurate and outmoded,” as Elmore states. In fact, the real reforms of education are occurring at the local level and at best are only remotely influenced by the debate at the national level.

President Clinton has proposed national tests of reading and math. This endeavor would cost the United States $100 million annually. That money could hire thousands of teachers to relieve the overcrowding of classrooms or provide computers for inner-city schools or training for teachers. The federal government already spends over $500 million annually on tests. We don’t need another test to tell us that the same students are not achieving. What we need is to help those students achieve by putting more money into the classroom.

Elmore correctly states that “policy talk is influential in shaping public perceptions of the quality of schooling and what should be done about it . . . [but] policy talk hardly ever influences the deep-seated and enduring structures and practices of schooling.” I could not agree more with this statement. I have long said that any reforms that we make in Washington are worthless unless state and local people are committed to those reforms. We can open a door but we cannot make someone walk through the doorway.

Reforms in education are happening daily across this country at the local level. Reform has already occurred in the Bronx at the Young Adult Learning Academy (YALA), where out-of-school students banded together and demanded services. “They wanted an avenue of opportunity,” notes the director of YALA. The program requires an 80 percent attendance rate from each student, and 90 percent of the students who complete the program go on to further training, school, or a full-time job. This was not a federal government reform but a local reform initiated by students and teachers.

In closing, let me reiterate that we should not be modest in our demands on schools, as Elmore suggests. We should be bold. As parents we should demand that all students are given an avenue of opportunity. But these demands should be made at the local not the federal level. It is the arrogance of Washington bureaucrats who think they know what is best for the students of Gettysburg, Atlanta, or Austin that has put us in this situation in the first place. In reality, it is the parent and teacher working together who know what is best.

REP. WILLIAM F. GOODLING

Republican of Pennsylvania

Chairman

Committee on Education and the Workforce


The futures of the university

Everyone believes that in the next decades we will continue to witness substantial changes in the structure and practices of higher education. It is, however, difficult to forecast with confidence the detailed nature of these changes, since actual developments will depend on a host of factors, both external and internal to the university, as higher education searches for the most effective ways to meet its responsibilities in a new environment. It is easy to predict, however, that the stimuli for change will have different meanings for different institutions and disciplines, and that some institutions and disciplines will adapt adeptly, while others will not. In this context, “The Global University” (Issues, Fall 1997) by Philip Condit and R. Byron Pipes is a welcome stimulus to our thinking.

However, it seems to me that a richer set of possibilities exists than those implied by a model that has higher education transforming itself to resemble the latest adaptations of U.S. industry. Although there is little question that industry and higher education have a great deal to learn from each other, we both also have a great deal to learn from history. What our history suggests is that although U.S. industry and higher education have transformed themselves a number of times over the past century or so, the most interesting developments in higher education cannot be adequately characterized as attempts to make the nation’s colleges and universities more closely resemble or more exclusively serve industry. The private sector is of course the destination of the majority of higher education’s graduates, and it is essential that the nation’s colleges and universities serve their needs. However, this objective must be accommodated with a host of other serious obligations that the university is, quite appropriately, expected to fulfill. They include the education of students for other sectors; advanced research training; the development and preservation of knowledge; and serving as a constructive critic of existing arrangements in science, in business, and in society at large. There is precedent in our history for locating centers of learning where they can be of direct service; this was an important objective in the creation more than a century ago of the great state universities and land grant institutions with agricultural extension programs that worked directly with farmers to create an agricultural revolution. But one of the greatest strengths of our system of higher education has been its diversity. Although it seems quite plausible to me that the model suggested by Condit and Pipes, or something like it, will find a place in the higher education sector, I doubt that this or any other single model will turn out to be adequate for the full spectrum of responsibilities that higher education will carry. It seems to me that it would be more interesting to speculate on which of the various models that might play an important role in higher education’s future can find expression in a single institution, and which models, or combinations of models, will be the ones to which our best students and teachers will aspire.

As we think of the many valuable lessons we in higher education can learn from the exciting transformations currently taking place in industry, and of the new needs we all experience for a greater degree of lifelong access to higher education and a larger global perspective, it is well also to remind ourselves that the future still contains a great deal of uncertainty. Higher education is itself an industry, shaped by a wide range of consumers, clients, and other interested parties, as well as by commitments and aspirations that, at least in some respects, distinguish it from other industries and other sectors of our society.

HAROLD T. SHAPIRO

President

Princeton University


Philip Condit and R. Byron Pipes are right on target in their article about the need to restructure engineering education to better serve the rapidly changing nature of industry. There is little doubt that our colleges and universities must become global in scope. They must provide a continuum of educational services not only for the traditional students but for working professionals as well. And they must develop clearly articulated and acceptable standards for engineering education.

But one might even go beyond engineering education to suggest that the higher education enterprise itself is in the early stages of a major restructuring similar to that experienced by other industries such as health care, telecommunications, and energy. Like other social institutions, our universities must become more focused on those we serve. We must transform ourselves from teacher-centered to learner-centered institutions. Society will demand that we become far more affordable, providing educational opportunities within the resources of all citizens. In an age of knowledge, the need for advanced education and skills will require both a willingness to continue to learn throughout one’s life and a commitment on the part of our institutions to provide opportunities for lifelong learning. The concepts of student and alumnus will merge. Our highly partitioned system of education will blend increasingly into a seamless web in which primary and secondary education; undergraduate, graduate, and professional education; on-the-job training and continuing education; and lifelong enrichment become a continuum.

There also will be major changes in pedagogy as societal needs require us to shift from “just in case” paradigms, in which learning is concentrated in degree programs in the hope it will be of use later; to “just in time” learning, where education and training are provided when and where they are needed; to “just for you” learning, in which customized educational services are provided to meet the particular needs of students.

Although all of this is quite consistent with the models suggested by Condit and Pipes, I also believe that the pervasive educational needs of our society, its people, and its institutions will require entirely new types of learning institutions that are set free from the constraints of space and time by emerging information technology. Already we have seen the emergence of virtual universities, designed to provide educational services to anyone at any time and any place, based on their career needs and lifestyles. For-profit educational providers such as the University of Phoenix are emerging to serve the needs of adult learners. And there are signs of an unbundling of higher education, with the emergence of organizations focusing on limited goals, such as packaging educational content, delivering educational services, or assessing learning outcomes.

Global industries are important clients of engineering education, and so too are individual students, state and federal government, and the host of professions now attracting engineering graduates. Hence, great diversity must continue to characterize engineering education if we are to respond to the needs of an increasingly knowledge-driven global society.

JAMES J. DUDERSTADT

President Emeritus and University Professor of Science and Engineering

University of Michigan

Ann Arbor, Michigan


The vision of the global university put forth by Philip Condit and R. Byron Pipes is an interesting scenario; there certainly are trends at work today that give it a chilling sense of reality. The problem I have with it is that it is essentially a technocratic vision with a strong implied element of superiority and domination of some institutions and cultures by others. A small number of universities, presumably U.S.-based, with close ties to industry, will survive and thrive while those that remain more aloof from the call to serve industry will perish. What about the many other ways in which universities serve society? What about the local educational institutions in countries and cultures around the world? What about the body of knowledge that is of less direct relevance to the industrial bottom line, of which universities have traditionally been the keepers, creators, and passers-on? I for one would rather have an education in which I do not have to wait until I am 60 or 70 years old before I am exposed to “the kind of wide-ranging humanistic knowledge that leads to greater personal development,” which comes last in Condit and Pipes’ educational chronology.

There are exciting possibilities for using information technology in higher education and there is important and growing interaction and cooperation between universities and industry. But there are also separate and distinct roles and identities for these two institutions. Some educational elements of the Condit-Pipes scenario might best be undertaken by companies themselves. Universities owe their longevity and stability to some unique features, the most important of which are intellectual and academic freedom and tenure, as well as a good measure of independence. Current moves to erode these fundamental underpinnings are fraught with peril, not only for universities but for the larger society.

The global university of Condit and Pipes represents a limited approach for bringing a homogenized product to a small, albeit significant, slice of the world. Industry has great potential for continuing to improve the standard of living and the quality of life throughout the world. U.S. universities have made some valuable contributions to improving universities in developing countries. There is a role for some aspects of Condit and Pipes’ scenario, but the discussion of the mission, shape, and functions of the global university in this day and age needs international inputs as well as a vision that transcends technology and economics.

ROBERT P. MORGAN

Elvera and William Stuckenberg Professor of Technology and Human Affairs

Director, Center for Technology Assessment and Policy

School of Engineering and Applied Science

Washington University in St. Louis

St. Louis, Missouri


Agriculture and environment

Dennis T. Avery’s article raises a number of important development issues. As the administrator of the U.S. Agency for International Development (USAID), I certainly agree that agricultural research is a valuable investment for the United States and that higher agricultural yields will reduce pressure on wildlife habitats. USAID has recently increased its emphasis on agricultural programs, and we are working to build back our capacity to advance research in this field. However, it must be noted that overall funding for U.S. foreign assistance programs has been extremely tight in recent years and that resource allocations-whether to agriculture, basic education, health, or any of the other sectors we work in-all come from a zero sum game. The Clinton administration has worked very hard to keep its support for these different, and mutually reinforcing, aspects of development balanced and integrated despite overall budget cuts.

In the same vein, we believe that a balanced approach is needed with regard to pesticide use and other environmental hazards. Although it is true that eliminating all pesticides is not realistic in the short run, there is excellent research being done that shows that we can both increase yields and cut pesticide use in many cases. The long-run goal should continue to be the elimination or sharp reduction of pesticide use. Future agricultural research must take account of the mounting evidence regarding environmental threats.

With regard to Avery’s comments on free trade, the Clinton administration agrees that free and fair trade is in the best interests of developed and developing nations alike. The administration has worked aggressively to expand free trade and the benefits of international investment. Built into this support for free trade is an understanding that trade policies should be sensitive to the environment and that sound environmental policies ultimately make good economic sense.

I would point out that Avery made what was probably an inadvertent error in his presentation of funding data. Avery stated that USAID previously provided 25 percent of the Consultative Group on International Agricultural Research (CGIAR) budget, which is true. He also said that this level has since been reduced to “about 10 percent of USAID’s budget.” This should have read, “USAID now contributes about 10 percent of the CGIAR budget.”

J. BRIAN ATWOOD

Administrator

U.S. Agency for International Development

Washington, D.C.


As an environmentalist and chairman of a large coalition of organizations concerned with agricultural development and global food security, I heartily agree with Dennis T. Avery’s principal point-the urgent need to intensify global agricultural production (“Saving Nature’s Legacy Through Better Farming,” Issues, Fall 1997). I find one major fault in this article: Avery has not addressed the perplexing question of how to intensify agricultural production on the less-well-endowed lands cultivated by the millions of poor farmers in Africa and upland Asia and Latin America. These are the places where the forests, which we both believe must be preserved, are endangered.

I commend Avery for his emphasis on the absolute necessity of intensifying food production globally. In fact, I would have made the case even more strongly: To feed the ballooning population of the world at a minimum caloric level, let alone bring the roughly 800 million people now suffering malnutrition up to a minimum standard, the farmers of the world will have to more than double food production in just a few decades. Moreover, they will have to do this using less good land (too much farmland is becoming less productive because of salinization, waterlogging, and other widespread practices destructive of the soil) and with less water (because of rapidly rising demand from cities and industries for finite water supplies).

I agree with Avery that much of the answer to these challenges will have to come from scientists and agricultural research. But I would emphasize more than he has that scientists must find better ways to work with farmers to develop plants and production systems that will be profitable. He is, of course, very right in pointing to the need for much greater investment in agricultural research. This is especially true for agricultural research aimed at the developing countries still faced with population explosions. I also agree that national self-sufficiency cannot be the answer to the problem of global food security. Most of the huge volume of additional food will have to come from production on the world’s best lands, especially precious irrigated lands, too many of which are endangered by overuse and misuse of water. He’s also right in pointing out that most countries will have to import increasing amounts of food. To pay for it, food-importing countries will have to earn foreign exchange from much greater exports, including exports of food. That is the strongest case for the freer trade in agricultural products that Avery urges.

Where I find Avery’s message incomplete is in his failure to recognize how difficult it will be for poor people living on the less-well-endowed lands in developing countries (a billion or so and still increasing) to feed themselves. Subsidizing their production is not the answer. Nor is neglect that will require them to move to already overcrowded cities-or, if they are on or near the tropical forest frontier, to move into and cut down the forests and thereby contribute to the destruction of natural diversity. Poor farmers who cannot afford to buy enough to feed their families and who want to stay on the farm will have to grow most of their own food. Unfortunately, an enormous number of them occupy marginal land that is often in hilly or arid country and is not profitable for the kinds of agricultural production or land consolidation that Avery foresees. For most of them, outside inputs such as fertilizers are too expensive or-in remote areas-not available. Helping as many as possible of these millions of farmers intensify production enough to feed their families and produce some surplus, without destroying the very resources on which their future production must depend, is an enormous human and scientific challenge. Much of the work of the remarkable International Agricultural Research Centers is devoted to just this point.

One problem Avery does not mention is the increasing evidence that a number of synthetic inorganic chemicals, both industrial and agricultural, may turn out to pose an unacceptable threat to the very ability of species to reproduce. The strongest evidence on this point currently comes from animal research, and we must all follow this research closely. In the meantime, it is even more reason to not overuse or inappropriately use agricultural chemicals, a point on which I know Avery heartily agrees.

ROBERT O. BLAKE

Chairman, Committee on Agricultural Sustainability

Washington, D.C.


Dennis T. Avery is clearly correct in his insistence that if farmers are to meet the demands for food arising from population and income growth, most of the increases in production will have to come from increases in crop yield per hectare and from increases in the efficiency of animal feed. This means more intensive production in the more robust soil areas if we are to avoid pushing crop production further on to the more fragile areas. With sufficient yield increases in the robust areas, it may even be possible to reduce crop production on some of the more fragile areas.

Avery is also correct in stating that there is underinvestment in agricultural research in both developed and developing countries. In spite of the promise of biotechnology, it is not as easy to identify the sources of yield increases during the next half century as it was in the 1960s, when the investments in agricultural research, water resource development, and fertilizer production capacity that led to the green revolution were being made.

During the next half century, the world’s farmers will be confronted with a number of surprises. Some will be associated with global climate change, and complacency about the progress of the demographic transition should not make us forget the inaccuracy of past forecasts of population growth. Countries that fail to develop or maintain strong agricultural research capacity will be unable to protect their farmers and consumers from the surprises that will emerge in the future.

VERNON W. RUTTAN

Regents Professor

Department of Applied Economics

University of Minnesota


Dennis T. Avery proffers the important suggestion that if governments increase support for high-yielding crops and advanced farming methods, including the use of fertilizers and pesticides on existing farmland, the result would be more food and less conversion of natural areas into farmland. This factor, while obvious, is often unrecognized by environmentalists and agriculturalists alike.

Avery uses a very broad brush in making his arguments and could have gone into greater depth on specifics of how this might occur from both a biological and political standpoint. He emphasizes underinvestment in agricultural research as a negative factor in saving wild lands through better farming. Conversely, he sees as a positive factor, though underfunded, the impact of the International Rice Research Institute, the U.S. Food and Drug Administration and the U.S. Agency for International Development. He fails to mention, however, the U.S. land grant colleges of agriculture within land grant universities, which have demonstrated capabilities in research, teaching, and outreach and have been enormously effective in improving U.S. agriculture. They need wider duplication throughout the world. I agree that the U.S. Department of Agriculture’s role and programs are also critical models for tackling the problems Avery so ably enunciates.

The land grant colleges of agriculture should consider reorienting their focus toward improving natural resource management and environmental quality. The colleges of agriculture should provide courses in general education in the various universities so that a broader student body is informed about the problems mentioned by Avery and about the specifics of management of nonagricultural lands.

JAMES H. MEYER

Chancellor Emeritus

University of California, Davis


Dennis T. Avery’s description of higher crop yields sparing land for Nature invites two questions. Has the explosion of crop yields since World War II driven yields against a biological limit? And if higher crop yields spared land from the plow, can faster-growing trees spare forests from the ax?

From the Civil War until World War II, U.S. farmers grew 1.5 to 2 tons per hectare (tons/ha) of corn but now average 8. Although we might compare these 8 tons with laboratory and physiological models, record real-world yields have the advantage of testing plant allocation to grain versus root, stem, and foliage, as well as photosynthetic capacity.

Iowa provides a likely candidate for a yield near the limit. In 1996, the Crop Improvement Association conducted the Iowa Master Corn Growers Contest among 3,225 competitors. Contests are sponsored locally under rules set by the association, which also oversees the checking of yields. In 1996, the winner broke the state record with 19.5 tons/ha. This number was no fluke. The winner, who won six times from 1967 to 1996, fertilized abundantly, inspected the growing crop 24 times, and controlled pests. Except for a late spring, the weather was ideal, and the crop grew without irrigation. The winner grew some three times as many plants on each hectare as his grandfather would have grown. Iowa does not monopolize high yields. The winner among the 3,679 entrants in the 1996 contest conducted by the National Corn Growers Association grew 20.3 tons/ha in Tonopah, Arizona, and an entrant in Sterling, Nebraska, tied Iowa’s record of 19.5.

Clearly, the present U.S. average corn yield of 8 tons/ha and the world average of 4 leave much room for increase through experiments and better management of seeds, spacing, water, fertilizer, and pests. The Iowa winner’s 24 inspections foreshadow precision farming that combines global positioning technology, soil classification and tests, and meticulous yield records to tailor varieties and chemicals to each square meter of a field. The yield of 20 tons/ha also holds out hope that imparting some of corn’s photosynthetic capacity to other species could, for example, lift the world averages of 2.5 tons/ha for wheat and rice.

On U.S. timberland, annual growth currently averages about 1.5 tons/ha. Rates 10 to 20 times faster have been reported for trees as diverse as alder, poplar, eucalyptus, hemlock, and loblolly pine. Strategies as simple as ridges to improve drainage in wet soils speed growth.

High yields are the best friend of habitat. Without ignoring the risks of intensive cultivation, we can lift average yields toward the present limit of 20 tons/ha and lift the limit even more. Allowing for much more urban sprawl, we estimate that through higher yields, U.S. farmers and foresters can meet the demands of more and richer people and still spare for Nature some 90 million hectares of U.S. land that is currently cropped or logged. This area equals 100 Yellowstones or the area of Bolivia or Nigeria.
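Because the land-sparing comparison above rests on simple arithmetic, a brief sketch may help readers check it. The reference areas below for Yellowstone, Bolivia, and Nigeria are rounded figures assumed for illustration; they are not taken from the letter.

```python
# Rough check of the land-sparing comparison, using assumed (rounded) reference
# areas in millions of hectares; these figures are illustrative, not from the letter.
spared_mha = 90.0        # land the letter estimates higher yields could spare
yellowstone_mha = 0.9    # assumed area of Yellowstone National Park
bolivia_mha = 110.0      # assumed area of Bolivia
nigeria_mha = 92.0       # assumed area of Nigeria

print(f"Yellowstones spared: {spared_mha / yellowstone_mha:.0f}")   # roughly 100
print(f"Share of Bolivia:    {spared_mha / bolivia_mha:.0%}")       # roughly 80%
print(f"Share of Nigeria:    {spared_mha / nigeria_mha:.0%}")       # roughly 100%
```

Under these assumed areas, 90 million hectares does indeed correspond to about 100 Yellowstones and to an area on the order of Bolivia or Nigeria, consistent with the letter's claim.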

JESSE H. AUSUBEL

The Rockefeller University

New York, New York

PAUL E. WAGGONER

Connecticut Agricultural Experiment Station

New Haven, Connecticut


Dennis T. Avery carefully employs data, generally from scientific sources, and analytical insights to contribute to an important debate on the role of agriculture and agricultural policy in global society. Whether one agrees with Avery’s philosophy and conclusions or not, his use of readily examinable evidence in support of those positions should be commended. This characteristic alone separates Avery’s work from the majority of public discourse relating to the interface of agriculture and the environment.

The outcome of public discourse and decisionmaking is often determined by the frame in which the problem is cast. In this regard, Avery’s recent writings have made a critical contribution. For years, the need for high-yield agriculture has been positioned as a response to the need to feed a rapidly growing world population. Avery adds a third dimension: natural habitat. His premises are plausible: that populations will exploit natural resources rather than starve, and that exploiting habitat is very detrimental to the natural environment-more detrimental than high-yield agriculture.

Avery’s latter conclusion is certainly worthy of debate, and to some it is probably disagreeable. And even though I agree with it in general, there are numerous issues associated with high-yield agriculture that deserve scrutiny. Society deserves to be informed about the rewards and risks of innovations, whether they are chemical, biotechnological, or organic in nature. But Avery’s framing of the question forces us to address the risk of not using modern agricultural techniques on land that is best suited for intensive agriculture.

Avery is to be commended for advancing a perspective that runs counter to the conventional wisdom in some circles and for continuing to stress the need for decisions based on data rather than intuition and emotion. His arguments force us to address one of our fundamental global challenges-the responsibility to feed a world population that is growing in number and purchasing power, while maintaining the maximum environmental benefits, not just in our own locale but worldwide.

STEVE SONKA

Soybean Industry Chair in Agricultural Strategy

Director, National Soybean Research Laboratory

University of Illinois at Urbana-Champaign


Recent projections of future world food supply and demand show considerable disagreement about supply but amazing consensus on demand-that world food requirements will double within 30 years. Attempts to double food production worldwide by increasing the amount of land under cultivation would result in massive destruction of forests and with them wildlife habitat, biodiversity, and capacity to absorb carbon dioxide. Dennis T. Avery correctly concludes that the only way to avoid these environmental ills is to increase the productivity of lands presently under cultivation. Critical to this effort are incentives and investment in R&D.

Many improved technologies that exist have not been put into use because of inadequate incentives, especially in low-income countries that follow a cheap food policy and underinvest in rural infrastructure. And research is key to increasing the efficiency with which plants and animals convert nutrients and other inputs-including water-into growth; to increasing their tolerance to drought, salinity, and cold; to reducing losses of potential productivity from insects, diseases, and competition from weeds and parasites; and to reducing postharvest losses with better storage and processing.

A short-term jump in maize and wheat prices during 1995-96 brought out the doomsayers, much as occurred after the “world food crisis” of 1973-1974. At that time, pessimists broadcast that the Malthusian dilemma was upon us, and grain prices would rise as far as the eye could see into the 21st century. Like Malthus two centuries before them, however, they were wrong, because they assumed technology would remain static. Because of productivity increases brought by technological change, millions of hectares of forests and wildlife habitat have been saved, and the real price of grains (after adjusting for inflation) has trended downward for over 150 years, to the great benefit of consumers.

There is no reason for the doomsayers to be any more correct now. We are in the golden age of biological sciences, electronic sensors, and information processing. These tools of science give us huge opportunities to increase the productivity and efficiency of the food system. But R&D is not free. Investments by the public and private sectors are essential in order to progress from basic scientific principles to practical technologies that can be applied on the farm or in food processing.

Despite this potential, public investment in agricultural R&D has been declining for over 20 years in the United States, the European Union, and the international agricultural research system. Although the pace of private sector investment has significantly increased, it has not offset reductions in public spending.

Unfortunately, environmental activists have labeled agricultural science as part of the problem, not part of the solution. By effectively manipulating the media, they have created hysteria over chemicals, pesticides, hormones, and other inputs of modern agricultural production that is well beyond anything justified by objective evidence.

Some nongovernmental environmental organizations operating in developing countries are promoting agricultural production systems based on a strong environmental ethic, often without scientific foundation. There is a real danger that these efforts will delay agricultural development in some poor countries. In the meantime, massive environmental devastation is likely to occur as forests are sacrificed to expand the land area under cultivation to feed the world’s population.

To create a continuing stream of productivity-enhancing and cost-reducing technologies to meet society’s demands for food safety and environmental quality, we must have more public and private sector investment in R&D, training and education programs for scientists, and well-equipped research institutions.

ROBERT L. THOMPSON

President

Winrock International Institute for Agricultural Development

Morrilton, Arkansas


Whither federal electricity?

Kudos to Richard Munson for his fine piece on America’s “Federal Power Dinosaurs” (Issues, Fall 1997). Munson is right on the money in pointing out that the Tennessee Valley Authority, the Rural Electrification Administration, and the Power Marketing Administration have far outlived their usefulness and need to be privatized.

These public power entities have wasted taxpayer dollars far too long in their less-than-impressive attempts to satisfy parochial interests. Millions of Americans receive absolutely no benefits from these programs, yet continue to fund their operation. This does not mean, however, that Americans who are served or subsidized by these programs are necessarily receiving a great deal. For example, although the Tennessee Valley Authority offers its customers fairly low-priced subsidized power, Tennessee consumers could actually find even cheaper power just over the border in Kentucky, where private utilities receive no federal subsidies whatsoever!

Furthermore, as Munson aptly points out, dozens of other countries across the globe have privatized or are considering privatizing their public power sector. It’s about time America got on the bandwagon before it’s too late.

Finally, there is no denying Munson’s conclusion that “Federal utilities cannot continue to be sacred cows. The status quo is simply too expensive for both taxpayers and ratepayers.” Equally important, however, is the fact that their continued existence spells trouble for the advent of true free market competition in the electricity sector. If public power retains its subsidized advantages as the checkered flag falls to start a new era of cutthroat competition, the proverbial playing field will be most uneven and U.S. consumers and taxpayers will suffer. Congress should take Munson’s sagacious advice, therefore, and deal with public power before public power destroys the competitive electricity market that Congress hopes to bring about.

ADAM D. THIERER

Alex C. Walker Fellow in Economic Policy

The Heritage Foundation

Washington, D.C.


Richard Munson’s “Federal Power Dinosaurs” (Issues, Fall 1997) is a political polemic that bears little resemblance to a scientific or scholarly inquiry and thus seems rather out of place in a science and technology journal.

Munson has elsewhere attacked public power (see Public Utilities Fortnightly 135 (July 1, 1997), no. 13: 24) and I have responded (see Public Utilities Fortnightly 135 (Sept. 1, 1997), no. 16: 40). I won’t attempt to repeat here what was said there.

In his Issues article, Munson alleges that “federal power subsidies . . . distort the market, discourage efficiency, waste taxpayer dollars, and pit regions against each other.” The subsidy issue boils down to the appropriate interest rate on the federal investment in multipurpose hydroelectric projects constructed by the Bureau of Reclamation and the U.S. Army Corps of Engineers, and involves a policy call regarding the use of fixed or variable interest rates. Munson would impose a variable rate, adjusted at least yearly. Federal policy provides for a fixed rate that is generally set at the date of construction or operation of the facilities in question.

Munson offers no evidence of market distortion. Given the fact that this Power Marketing Administration (PMA) power is marketed in more than 30 states through about 1,200 electric utilities, it is hard to imagine how it might distort the market. In fact, a persuasive case can be made that federal power, both Tennessee Valley Authority (TVA) power and that marketed by the PMAs, imposes a market discipline by establishing a cost-based price benchmark against which market-based pricing in the future can be compared.

Nor is there any evidence that the sale of federal power discourages conservation. This is a valuable low-cost resource that is available in limited quantities. (Its low cost is not due to subsidies but to the fact that it is renewable and that the projects were constructed decades ago when costs of labor, material, and capital were lower than today.) Because the amount of power is finite, recipients have an incentive to conserve, not squander it. This is particularly true in a more competitive environment.

And there is no waste of taxpayer dollars. TVA’s power program is self-financing. The PMA customers are repaying the federal investment, plus interest. If Munson wants to root out wasted federal dollars, he can find many examples in his own region. The waste of billions of dollars on the “Big Dig” interstate highway/tunnel project in downtown Boston and the forgiveness of any repayment obligation for federal investments in the St. Lawrence Seaway are two that spring to mind.

Finally, it seems that it is Munson who is primarily responsible for trying to pit regions against each other, specifically the Northeast and Midwest against the South, West, and Pacific Northwest. Although the Northeast and Midwest have not benefited from federal investment in hydropower projects, they have benefited from billions of federal dollars invested in regional infrastructure for such things as navigation, harbor construction, and water purification projects. Unlike the sale of federal power, the beneficiaries of these investments are generally not repaying the Treasury for its investment, let alone the interest.

Munson’s crusade to tear down programs that have worked well and have provided national as well as regional benefits for all consumers is indeed curious. It is difficult to understand why this is an issue of such critical significance to Munson’s Northeast-Midwest Institute. Perhaps the institute would be better served by focusing on positive progressive projects that enhance the quality of life or further promote needed infrastructure investments in its region of the country.

ALAN H. RICHARDSON

Executive Director

American Public Power Association

Washington, D.C.


I am disappointed that Richard Munson breaks no new ground in the debate over the future of the electric utility industry. The anticonsumer policies advocated by Munson and the Northeast-Midwest Coalition would serve only to help utility companies build bigger empires. As executive director of the Northeast-Midwest Coalition, Munson could make a more productive contribution to the debate by working to reduce power rates in the states he represents, instead of attacking states with lower power rates.

Most of Munson’s article advocates old ideas that have been rejected by Congress and are no longer relevant. I had hoped that he would take a more open and honest look at the electric utility industry and in particular the side of the industry he represents. His attacks on federal hydropower programs fail to mention that private power companies operate a vast network of hydroelectric facilities on the public waterways. These companies own one-third of all the hydroelectric capacity in the country, and their power costs are lower than those of the federal hydropower systems because they pay nothing for maintenance of the waterways they use and nothing for the water they use to run these plants. If these facilities were coal-fired generators, this would amount to making the taxpayers pay the cost of the coal used to generate the electricity they buy.

Instead of criticizing federal programs that work for the average consumer, the debate over how the electric utility industry should change must include a careful examination of who will benefit. We must also determine what guarantees should be put in place to protect small consumers from the transfer of lower-cost power to industrial and commercial customers that are more lucrative for power companies. The federal power programs are an important check on that system that protects consumers. So far, no reasonable alternative has been offered.

In his criticism of consumer-owned, not-for-profit electric cooperatives, Munson fails to acknowledge that investor-owned utilities receive a subsidy that is nearly twice the subsidy that electric cooperatives receive on a per-consumer basis. The investor-owned utilities that Munson represents get their subsidies in the form of retained taxes that they collect from their customers but do not pay to the federal government. Until those taxes are paid, the money is essentially an interest-free loan from the taxpayers to these huge corporations. The annual cost of that subsidy is more than $5 billion, and to date these companies have run up a tab of more than $74 billion. That is money they owe the taxpayers.

Munson also fails to acknowledge that with the 1993 reforms creating the Rural Utilities Service, the electric cooperative loan program has cut its costs dramatically. Funding is down nearly 80 percent since 1993. Electric coops helped develop those reforms. Today, only 25 percent of all the financing obtained by electric cooperatives comes from the government.

Electric cooperatives and other consumer groups are committed to being actively involved in the effort to change our industry and intend to fight for all our consumers, no matter how big or small.

GLENN ENGLISH

Chief Executive Officer

National Rural Electric Cooperative Association

Washington, D.C.


Richard Munson’s thoughtful and well-documented article raises issues that should be of concern to all Americans. Each era brings new national problems, and each Congress will hopefully find solutions appropriate to the circumstances of the time. But if each new solution creates a new entitlement and a concomitant bureaucracy in perpetuity, America is doomed to the most peculiar of futures: Although we have solved the problem, we will pay for the solution’s infrastructure forever.

My family and I benefited enormously in 1946 when “the lights came over the hill” on electric wire strung to our cabin. No more drudgery of washing out kerosene chimneys in water hauled by hand and heated on the kerosene stove or of similar chores that could now be done by modern electrical appliances. We were grateful!

But for 30 years there hasn’t been a place in the country that couldn’t get inexpensive electricity. The poles are there, and the wires, and federal power has long since been supplemented by coop-owned power plants and interstate transmission lines. Any generator could use that grid to deliver power; the federal government’s role is no longer needed. Besides, there are no longer large numbers of dirt-poor farmers in the countryside. There are just a few farmers left, but there are also thousands of luxury seasonal and year-round homes in the country and suburbs. They are now the primary beneficiaries of the government subsidies Munson describes.

Perhaps your readers, who presumably prefer logic over emotion, can be instrumental in reminding Congress that a religious remembrance of the rural past does not justify taxing Americans today to support a government program to bring electricity to families who’ve had it for generations.

KATHERINE ERIKSSON SASSEVILLE

Fergus Falls, Minnesota


Nourishing a growing world

Ross M. Welch, Gerald F. Combs Jr., and John M. Duxbury deserve compliments for the clarity with which they have analyzed the nutritional problems relating to protein-calorie mal- and undernutrition and deficiencies of micronutrients such as iron and iodine (“Toward a `Greener’ Revolution,” Issues, Fall 1997). However, I am not in full agreement with all the remedies they advocate. Diversification of diets and cropping systems to include pulses, vegetables, and fruits is a feasible solution, but improvement of the micronutrient content of grains through breeding is an uncertain route. It has been a general experience that when the nutrient composition of grains and plant parts is altered, new problems of pests and diseases may arise. A surer and better way is the widening of the food basket by including crops described aptly in the publications of the U.S. National Research Council as “lost,” many of which are rich in iron, calcium, micronutrients, and vitamins. Unfortunately, such nutritious grains are called “coarse grains” by the Food and Agriculture Organization of the UN, the U.S. Department of Agriculture, and other official agencies. Such an inappropriate and unfortunate epithet should be immediately changed. Millets and similar grains rich in micronutrients should be classified in the market as “nutritious grains.” This will help both to prevent them from getting “lost” and to provide needed micronutrients in the diet.

The demand for processed and semiprocessed food is growing quickly in developing and industrialized countries. Food technologists should incorporate nutritious grains in such processed foods to provide needed micronutrients. Such a step will help on the one hand to overcome hidden hunger caused by micronutrient deficiencies and on the other to foster through market demand the on-farm conservation of the fast-vanishing minor millets and legumes by tribal and rural families.

“Green revolution” is a term that symbolizes increased production through the productivity pathway. This is the only pathway available to land-hungry but population-rich countries that can keep food production above the rate of population growth. It would be useful to adopt the term “ever-green revolution” to indicate the need for environmentally sustainable advances in productivity, rather than to use terms such as “greener revolution,” “double green revolution,” and so on.

M. S. SWAMINATHAN

UNESCO Chair in Ecotechnology

Madras, India


The article by Ross M. Welch, Gerald F. Combs Jr., and John M. Duxbury makes an important contribution to the agricultural development debate by stressing the importance of nutritional goals in setting priorities for agricultural research. Enhanced food production is a means to an end-improved human welfare, including good nutrition-not an end in itself. The article is also effective in pointing out the seriousness of the micronutrient deficiencies in the human diet and opportunities for alleviating them through agricultural research.

Given the importance of these messages, it is sad that the article is cluttered by statements and conclusions that demonstrate a lack of understanding of the basic relationships between agricultural development and human nutrition. Agricultural development that results in higher incomes for malnourished people and lower costs of producing the food they grow or consume is almost certain to improve nutrition. For example, the Green Revolution facilitated a 30 percent reduction in the cost of producing rice and wheat. The associated savings were shared between producers and consumers. Poor households, which typically spend more than half their incomes on food, experienced a relatively large improvement in their real incomes. A significant share of these new incomes was spent on other food, including foods with larger amounts of micronutrients such as iron and vitamin A. It is simply incorrect to argue that such a strategy “does nothing to improve micronutrient nutrition.” The Green Revolution did more for human nutrition in developing countries (if one includes energy and protein deficiencies along with micronutrient deficiencies as part of malnutrition) than any other single development project.

Contrary to the claims made by the authors, the positive impact of the Green Revolution on micronutrient nutrition has also been very large, in spite of the limited research done on pulses. One must examine the changes in the total diet brought about by the increase in incomes and changes in relative prices, not just the nutrient content of the commodities being researched. Failure to do so results in erroneous conclusions such as “Although it [the Green Revolution] helped increase the production of staple foods, it did so at the expense of overall nutritional adequacy” and “The great paradox of the Green Revolution is that even though fewer people are starved of calories, billions of people remain starved of micronutrients.” The first of these conclusions is simply wrong. Without the Green Revolution, the predictions of the late 1950s and early 1960s that widespread hunger would kill many millions of Asians and leave many more in severe malnutrition might have materialized. True, the strategy used did not solve all nutrition problems, but, contrary to the authors’ claim, it was nevertheless an overwhelming success.

The challenges before us are to help the African countries achieve the same success and alleviate remaining micronutrient deficiencies. Effective collaboration among agricultural researchers, nutritionists, social scientists, and policymakers is essential. So is a solid understanding of the relationships between agriculture and nutrition. Let’s proceed on the merits of the case, not tear down past successes.

PER PINSTRUP-ANDERSEN

Director General

International Food Policy Research Institute

Washington, D.C.


Ross M. Welch, Gerald F. Combs Jr., and John M. Duxbury have identified a set of very important issues that arise when one attempts to make simple decisions in a complex system. In solving one problem-caloric malnutrition-we have created a global tragedy of micronutrient malnutrition that will be with us for decades at best. Their article should be on the reading list of anyone considering a career in food or agriculture. The activities of their group and similar ones, such as the Canadian Alliance for Food Systems and Health, are an excellent starting place.

What the authors have not explored in any depth, however, is the range of other effects-in terms of the social and economic well-being of rural communities and the ecological integrity of ecosystems-that will enable sustainable development to occur. And although they have identified many components of a way through these difficulties, they seem to end up with a new shopping list of research and pedagogical activities rather than a coherent, convincing, alternative strategy.

Agricultural activities may be seen to occur in a holarchic context that cannot be fully understood from one perspective. For instance, in recent years, several thousand U.S. and Canadian citizens have suffered from the parasite cyclospora in fresh raspberries from Guatemala; this is a result of U.S. government policies to “aid” poor peasants in the 1980s, coupled with an obsession with personal health in the U.S. (translated into increased consumption of fresh fruit and vegetables), demands for cheap food, and trade liberalization in the hemisphere. Similar stories could be told with regard to major environmental devastation associated with crash programs to increase swine production, the global epidemic of obesity associated with improved economic conditions, or the occurrence of “Mad Cow” disease in the United Kingdom.

In response to these kinds of situations, many of us are involved in R&D projects that integrate a range of perspectives (environmental issues, socioeconomic issues, health and disease), and a range of scales (farm, village, ecological region, country) in methods that combine local participation with the best science we can muster. Researchers at the International Center for Tropical Research, for instance, are not only breeding new varieties of plants but, with the University of Guelph (funded through the Canadian International Development Agency), are developing holistic socioecological approaches to research and management related to sustainable development. We are doing similar work in Kenya, funded by the International Development Research Centre.

The kinds of solutions we are working toward cut across disciplinary boundaries and government departments and often require new methodological tools and theoretical developments that draw on everything from complexity and chaos theories to what has been called “post-normal” (participatory, interactive, and democratic) science. The complex socioecological and public health problems we are facing at the end of the 20th century represent a whole new ball game, where paradox is central and solutions will be complex, tentative, and context-specific.

DAVID WALTNER-TOEWS

Department of Population Medicine

University of Guelph

Ontario, Canada


“Toward a `Greener’ Revolution” offers a plan to improve micronutrient consumption around the world. This would not only decrease diseases caused by nutritional deficiency in much of the world but would reduce the incidence in developed countries of chronic diseases in which diet plays a role. These include coronary heart disease, cancer, stroke, diabetes, osteoporosis, and neural tube defects.

As the authors discuss, the Green Revolution focused on producing high-yielding cereal crops to increase calories and protein in the food supply. Americans now consume an excess of calories and protein. I agree that it is time to make it a national priority to release seed varieties for commercial use that are not only high-yielding but also high in micronutrients. Increased consumption of legumes in particular would greatly increase micronutrient consumption. Americans eat too much corn in the form of fat-fried chips; too much wheat in the form of refined flour in cookies and cakes; and too few whole-grain products, legumes, fruits, and vegetables. However, it would require a huge effort to convince food service operations to serve whole-grain breads, three or four vegetables with every meal, and legume dishes nearly as often as cereal-based dishes; and to convince the customer to eat these improved diets.

Selection of micronutrient-rich crop varieties and production of designer crops with increased levels of health protectants might be a more realistic way to improve health than changing consumers’ habits. As we gain an understanding of the role of various constituents in plants, such as trace nutrients, antioxidants, phytochemicals, and fibers, in reducing the risk of disease and promoting health, we can begin the task of applying modern techniques in genetic engineering to produce designer crops for a healthier world.

CONNIE M. WEAVER

Chair, Department of Foods and Nutrition

Purdue University

West Lafayette, Indiana


As Ross M. Welch, Gerald F. Combs Jr., and John M. Duxbury point out, efforts to solve the global problem of insufficient calories in the diet have led to a weakening of the foundation of balanced diet that is needed for human vitality and well-being. It is unfortunate that meeting people’s macronutrient needs has compromised the satisfaction of their micronutrient requirements.

It should be self-evident that the task of agriculture is not just to produce more food but to provide the nutrients, both macro and micro, that produce healthy people. We have been content to have farmers and agricultural scientists “do their thing” and then to have any resulting deficiencies dealt with by nutritionists, but this is hardly an optimal solution. The “food systems” approach, which considers all links in the food chain from genetics to digestion, is more promising. However, this does not mean that realizing the potential of such interdisciplinary analysis and action will be easy.

I would be somewhat more circumspect in faulting the Green Revolution than Welch et al. are. The expansion of area devoted to the production of rice, wheat, and maize may have displaced production of fruits, vegetables, and legumes in some places. But if there had not been substantial increases in per-hectare yields of these staples, and if we had had to achieve the necessary levels of staple production with less-productive technologies, we might well have seen reductions in the area used for fruits, vegetables, and legumes in order to meet the demand for staples. Staples invariably take precedence over higher-quality foodstuffs because the need for calories is the most basic nutritional requirement. Rather than pit staples against foods richer in vitamins and minerals, I suggest that we focus on optimizing the production and consumption of nutritionally beneficial foods (which is what a food systems approach is supposed to do).

Much of the presentation by Welch and his associates focuses on supplying the most beneficial combinations of foods. But food supply is probably influenced more by patterns of demand than by scientific advances or even policy measures. When it comes to people’s choices about what they eat, nutritional considerations are pretty weak. Demand for food is shaped more by considerations of cost, taste, convenience, and status than by nutritional value.

However, as more and more research calls people’s attention to the link between what they eat and how healthy they are (or to what dire diseases they are more likely to contract if their dietary patterns are not the safest), nutrition and health considerations are rising in this hierarchy. In particular the middle and upper classes, who can afford to pay more for healthy foods, are becoming more nutrition-conscious. Yet the number of people who really take nutritional value seriously in their food purchasing and consumption decisions is still relatively small. Getting the agricultural industry to focus on such value in decisions about what to research, grow, procure, produce, and distribute will in the short to medium run depend more on patterns of demand than on what is socially desirable. So long as we function within market economies, profit will be a more powerful influence than virtue.

This means that much more effort should be devoted to communicating the health benefits of good nutrition, focusing more on its effects on mortality than vitality. This is part of a food systems approach, addressing demand at the same time as supply and requiring the participation of many disciplines.

The food systems approach is still fairly amorphous, being justified at present mostly by its evident merits as compared with approaches that are divided and partial. The next step is to use it to give some concrete demonstrations of how a more holistic diagnosis of the sources of nutritional deficits can produce more cost-effective and sustainable strategies that improve the health of significant numbers of people, particularly the disadvantaged and at-risk, such as pregnant mothers, children, people living in remote areas, and the poor.

We do not yet have much evidence about how such a comprehensive strategy would work. My own view is that a food systems approach can be made operational and useful, but it will require more creative and systematic thinking and more active experimentation than have been invested in any strategies for nutritional improvement thus far. The payoff would be that once established, this approach should be less costly and more self-sustaining.

NORMAN UPHOFF

Cornell International Institute for Food, Agriculture and Development

Ithaca, New York


Viewing Earth from space

For 25 years, I have sought public policies to ensure that the information pouring down from civilian and military remote-sensing satellites is put to greater use in understanding and solving some of the most important problems here on Earth. Therefore, it was gratifying to read “A Jeffersonian Vision for Mapping the World,” by William B. Wood (Issues, Fall 1997).

Remote-sensing policy was once of concern only to the U.S. government, but since the 1972 launch of LANDSAT 1 we have seen the arrival of new and more capable satellites from Europe, Russia, and India. The next great leap in capability will come as new commercial satellites join the constellation. The Subcommittee on Space and Aeronautics of the House Science Committee held two hearings on the growing commercial presence in orbit, on May 21 and June 4, 1997. These hearings led to the introduction of H.R. 1702, the Commercial Space Act of 1997, by Representatives Sensenbrenner, Rohrabacher, Cramer, Jackson Lee, and myself. One of the chief purposes of the bill is to continue the commercialization of remote sensing in the United States.

Of greater import for Wood’s proposal is the bill’s direction to the secretary of state to allow “[a]ppropriate United States Government agencies . . . to provide to developing nations, as a component of international aid, resources for purchasing remote sensing data, training, and analysis from commercial providers.” I would hope that by so doing, other nations would see at first hand the value of Wood’s proposed Global Spatial Data Infrastructure (GSDI) for managing the resources within their borders.

Wood makes the important point that the data sets underpinning the GSDI will require georeferencing if they are to be accurately integrated in geographic information systems. Unlike Lewis and Clark, modern cartographers have the Global Positioning System (GPS), one of our most successful dual-use technologies. As the committee notes in H.R. 1702, GPS “has become an essential element in civil, scientific, and military space development,” and the committee calls on the president to “ensure the operation of the Global Positioning System on a continuous worldwide basis free of direct user fees; and enter into international agreements that promote cooperation with foreign governments and international organizations . . . and eliminate any foreign barriers to applications of the Global Positioning System world-wide.” If Wood’s GSDI is to become a reality, this will be a critical prerequisite.

Despite the real and practical benefits offered by Wood’s proposal, I foresee its implementation as anything but a simple process. We have already seen that some nations remain sensitive to free access to remote-sensing information. There is, of course, the further problem of obtaining resources for the long-term maintenance of the data sets, something we here in the United States have yet to adequately ensure. As the benefits of integrating earth science information become clear through the efforts described in Wood’s article, we should be able to overcome these problems in time.

REP. GEORGE E. BROWN, JR.

Democrat of California

Ranking Democratic Member

Committee on Science


A new environmentalism

Marian R. Chertow and Daniel C. Esty’s “Environmental Policy: The Next Generation” (Issues, Fall 1997) summarizes a number of prescriptions for dealing with the challenges confronting the next generation of environmental policy, challenges also raised in their outstanding book Thinking Ecologically: The Next Generation of Environmental Policy (Yale University Press, 1997). They call for an integrated system of environmental protection, not one fragmented into air, water, and waste segments; better risk analysis and management; a less confrontational regulatory style; more inclusive environmental policy that engages local governments, the private sector, and the service industry in ways not hitherto attempted; and more innovation in environmental policy.

Critics might argue that Chertow and Esty are attempting to repair an essentially outmoded and undesirable system of pollution control, and that what is required is not mere tinkering with a fundamentally flawed system but a reassessment of our priorities, leading to the building of a new set of environmental policies and laws.

We appreciate the thrust of such criticism, but a wholesale demolition of the current formidable structure of law, policy, bureaucracy, dependent professionals, interest groups, lobbyists, and lawyers is not politically feasible. Instead, a set of perspectives that supplement and augment Chertow and Esty’s suggestions might move environmental policy in a more practicable direction.

First, consider human demands. Any new generation of environmental policy and law must comprehend and systemically reflect the fact that, to a very great extent, it is consumer or public demand for goods and services, not industrial perfidy, that causes environmental degradation. Significant environmental improvements can result from reshaping consumer preferences and consumption. For example, a reduced demand for pure white as opposed to off-white paper can significantly reduce water pollution by curtailing the environmentally damaging bleaching process. However, changing behavioral patterns in a democracy requires the acceptance of clearly understood policies that are supported by a majority of the public. Those policies should lead to autonomous choice and individual action, avoiding the “command and control” of current environmental laws. When it comes to individual behavior and personal preferences, Americans can be led but they cannot be driven.

Second, industry should be treated as a partner, not an adversary, in environmental protection. In responding to human wants, industry generates the flow of materials and energy in cycles beginning with the extraction of raw materials from the environment. After those materials are transformed, the wastes created are returned to the environment. Industry must be co-opted into the crucial task of reducing environmental burdens by improving this cycle. Chertow and Esty suggest that we should establish, in agriculture, a pollution tax so that farmers pay for their pollution but are also rewarded for constructive environmental actions. Such incentives should be available to all U.S. businesses.

The third point relates to technology. “Ecology,” according to its Greek roots, is the study of houses and thus entails a study of the house of humans: the physical and social environment created by technology that coexists with nature. Technology is surpassed only by religion as the most powerful social force in the saga of human society. Our technological society has been built on our understanding of fire, metals, fossil fuels, atoms, and the human genome and must now include understanding of pollution and resource exhaustion.

Technological innovation is a primary tool for achieving environmental improvements, and must be recognized as such by law and policy. To unleash the creative forces of technology, engineers should be allowed to engineer. Industry should be released from the shackles of obsolete technology controls and permitted to creatively meet environmental quality standards. As a corollary, environmental protection agencies should be obligated by legislative mandate to promote technological entrepreneurship and research.

An expanded “bubble” concept illustrates how to achieve a broad policy change in the regulation of emissions. Everything that occurs within the bubble is the responsibility of the enterprise. Everything that leaves the bubble becomes the concern of the people of the United States. As with agriculture, a penalty/bonus system could apply to emissions that exceed, meet, or fall below established benchmarks. Ideally, a tax would be related to the potential health effects of particular pollutants.

LAKSHMAN GURUSWAMY

University of Tulsa

Tulsa, Oklahoma


Our environment is an irreplaceable resource we should all work to preserve and protect. There can be no substitute for clean air and water, pure food, and wise land use practices. Surely, all Americans can and should affirm this basic commitment.

Yet our environment is placed at risk when persons of good will exercise poor judgment. The upcoming treaty on global climate change is a case in point. Not only are the scientific data on which the treaty (which the United States plans to sign in Kyoto, Japan, in December) is based inconclusive, but the plan to address emissions would not work. By omitting 134 of the world’s 166 nations from the treaty, the United States is giving a tacit invitation to the developing world to pollute with even greater energy.

The developing nations, such as China, India, Brazil, and Mexico, will, over the next 20 years, begin producing the bulk of the world’s greenhouse gases. And over the next half century, developing nations are expected to contribute 76 percent of total greenhouse gas emissions and up to 85 percent of the projected worldwide increase in carbon dioxide emissions.

By strapping U.S. companies with restrictive new emissions standards, the Kyoto treaty will provide a compelling incentive for U.S. firms to consider relocating abroad, stripping jobs out of the United States and concentrating pollution even more intensely in developing countries.

The U.S. Environmental Protection Agency’s proposed clean air standards are another example of unclear thinking about environmental programs. The standards would ratchet down permissible levels of fine particulate matter and ground-level ozone, ostensibly to protect human health. Yet, like the data on global climate change, the science on which the standards are based is highly questionable. And the impact on the economy would be disastrous. An estimated 800 counties-one-fourth of the national total-would fall into immediate noncompliance with the new regulations, and many would have long-term difficulty in meeting the new standards.

The jobs that could be lost as a result of the Kyoto treaty and the new clean air standards number, by some estimates, over one million. A review of the hard science behind each proposal shows that the bases for the treaty and the standards are severely wanting. Energy taxes could skyrocket by as much as 50 cents per gallon of gas. Human health could see little or no improvement. And the status of the environment might even worsen.

The irony is that substantial progress in improving the environment continues to be made in the United States. Although we are now driving more than twice as many miles as we did in 1970 and our gross domestic product has doubled during that time, we’ve cut emissions of the six major pollutants and their precursors by 29 percent. Manufacturing continues to develop anti-pollution technologies that offer great promise. And the traditional smokestack industries are fundamentally changing as high-tech procedures replace belching furnaces and soot-drenched skies.

Neglect and excess are the two extremes we must avoid. If we are serious about both necessary economic growth and sustainable environmental quality, we need to work together for the common good of humankind and the material environment in which we live. Industry is working hard to achieve this balance and should not be encumbered with unworkable regulations that only impede, rather than enhance, environmental and economic progress.

JERRY J. JASINOWSKI

President

National Association of Manufacturers

Washington, D.C.


Better relations with Japan

Much of my direct personal experience over the past eight years aligns with George R. Heaton’s observations in “Engaging an Independent Japan” (Issues, Summer 1997). Since 1989, I have worked on behalf of two research-intensive academic medical centers in the United States (the University of California, San Francisco, and the University of Maryland, Baltimore) in promoting each campus’ basic and clinical research capabilities to pharmaceutical, diagnostic, and medical device companies in Japan. In 1993, these efforts led to a five-year $19.8-million collaboration in basic cardiovascular biology between U.C. San Francisco and Daiichi Pharmaceutical Company of Tokyo. Over the years, this collaboration has been highly satisfactory to both parties. I have recently begun a similar effort on behalf of the faculty of the University of Maryland, Baltimore.

I was impressed with Heaton’s observation that “The cooperative approach most likely to result in mutual benefit is small-scale and particular.” In our activities, we have seen a dramatic evolution in the attitudes and behaviors of the R&D planners at major Japanese pharmaceutical companies. In 1990, we frequently experienced tremendous difficulty in breaking through the invariably polite but highly formal manner with which we were received in Osaka and Tokyo. Our objective when we visit Japan is always quite simple and straightforward: We wish to learn from the companies the nature of their needs and priorities so that we can offer campus-based capabilities that might be of value to them. In the case of many companies, and even after many meetings in the early 1990s, we often struggled to get beyond the formalities.

Today, how different are the meetings we have at most life-sciences companies in Japan! With only one exception, during our recent visits to 8 of the 10 largest Japanese drug companies we were given quite detailed outlines of the companies’ R&D objectives and priorities. In several cases, we were encouraged to immediately present areas where the university’s strengths might intersect with a company’s particular needs. The entire attitude of the senior management of these companies, and of their North America-based subsidiaries, is very different from what we found just five or six years ago.

Naturally, we applaud the efforts of government bureaucrats in Tokyo and in Washington to promote more vigorous interaction between the scientific communities, both commercial and academic, of our two countries. However, our experience bears out the most significant of Heaton’s observations: “True technological collaboration . . . is best achieved between individual companies, universities, and people.”

DENNIS J. HARTZELL

Executive Adviser to the President

University of Maryland, Baltimore


Rethinking government support for science

Rarely does one find such a balanced and informative book review as Richard R. Nelson’s of Terence Kealey’s The Economic Laws of Scientific Research. Nelson concedes that Kealey’s main arguments for phasing out all government support of science are defensible. However, his last sentence reads, “Perhaps the blatant extreme of Kealey’s position will serve the useful purpose of focusing the arguments of those who believe in public science . . .” But is that position so extreme? I think not.

I believe that Nelson misreads Kealey in suggesting that without government support, no science would be done that is not “intentionally oriented toward technology.” [To describe this type of science, Deborah Shapley and I coined the term “atelestic” (without purpose) science in our book Lost at the Frontier.] That is not Kealey’s intention. I myself have advocated the “disestablishment,” not stopping, of all such science: don’t shut it off, but get the government out of funding it. To draw an analogy, this nation disestablished religion; it did not destroy it. Indeed, Europeans marvel at the booming religion “business” in the United States, and that boom is supported by private money. What I have been trying to get my fellow scientists to do is open their minds to a radically different source of funding of atelestic science-private wealth. In a recent letter to Science, two directors of private foundations pointed out that enormous amounts of wealth will be inherited by the Baby Boomers. I have also made the case that if the U.S. billionaires whose wealth comes from technology gave half the increase in their wealth in one year to a U.S. culture foundation, we would have an endowment sufficient to provide for atelestic science, arts, and humanities.

One can quite logically support the proposition that public science funding should exclusively be given for research that demonstrably supports the public interest, including the creation and retention of jobs. Such research can be included in what I call “horizontal deer hunting”: at least shoot horizontally in the general direction of the deer. Atelestic basic research (from particle physics to radio astronomy) can be distinguished from such goal-oriented work by the term “vertical deer hunting”: shoot bullets up into the air and maybe one will hit a deer on the way down. Kealey’s position, I believe, and mine, is that private wealth (not business or industry support) should be used for all such vertical deer hunting.

RUSTUM ROY

Evan Pugh Professor of the Solid State

The Pennsylvania State University

University Park, Pennsylvania


Recounting coastal population

Don Hinrichsen estimated that nearly two-thirds of the world’s people make their homes within 150 kilometers of a coastline (“Coasts in Crisis,” Issues, Summer 1996). In a 1990 book, Hinrichsen estimated that almost 60 percent of the world’s people live within 100 kilometers of a sea coast. More accurate data from a global digital population map, now available at <http://www.ciesin.org/>, make it possible to obtain more precise estimates of coastal populations. As of 1994, approximately 37 percent of the world’s population lived within 100 kilometers of a coastline, and 44 percent within 150 kilometers. Although these estimates of coastal population size are considerably smaller than Hinrichsen’s, we agree that very large numbers of people affect and are affected by coastal zones.
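To make the comparison concrete, here is a minimal sketch of how such coastal-population shares might be computed from a gridded data set. The arrays, grid values, and thresholds below are hypothetical placeholders, not the CIESIN data or the authors’ actual method; a real analysis would also weight cells by area and use properly computed coastline distances.

```python
import numpy as np

# Hypothetical gridded inputs: population per cell and each cell's
# precomputed distance (in km) to the nearest coastline.
population = np.array([[2.0e6, 5.0e5, 1.0e5],
                       [8.0e5, 3.0e5, 5.0e4],
                       [1.0e5, 2.0e4, 1.0e4]])
coast_distance_km = np.array([[ 20.,  80., 300.],
                              [ 60., 140., 500.],
                              [120., 400., 900.]])

def coastal_share(pop, dist_km, threshold_km):
    """Fraction of total population living within threshold_km of a coast."""
    mask = dist_km <= threshold_km      # cells inside the coastal band
    return pop[mask].sum() / pop.sum()

for threshold in (100, 150):
    share = coastal_share(population, coast_distance_km, threshold)
    print(f"within {threshold} km of a coastline: {share:.0%}")
```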

JOEL E. COHEN

Rockefeller University

New York, New York

CHRISTOPHER SMALL

Columbia University

New York, New York

Finally-A Real Defense Debate

Blue-ribbon defense commissions are often accused of treading on familiar ground in evaluating the Department of Defense’s (DOD’s) strategy and programs, offering up old wine in new bottles to the point where the Pentagon’s brass can legitimately say, “Been there, done that.” Such is not the case, however, with the new report of the National Defense Panel (NDP), an independent group convened by Congress to assess future U.S. defense requirements.

The NDP concluded that “unless we are willing to pursue a new course” from the one outlined in DOD’s Quadrennial Defense Review (QDR), “we are likely to have forces that are ill-suited to protect our security twenty years from now.” Indeed, the NDP report differs, often dramatically, from the QDR on key issues that will influence future U.S. security.

Despite its wide-ranging discussion of future challenges to our security, the QDR determined future U.S. military forces’ effectiveness principally by evaluating how they will fare in a replay of the Gulf War, even though our prospective adversaries have strong incentives to avoid fighting us the way the Iraqis did.

By contrast, the NDP argues that the challenges to our security will change dramatically, and perhaps radically, over the next decade or two. The U.S. military will lose the near-monopolies it enjoyed during the Gulf War in long-range precision strike weapons as well as in the ability to use satellites to plot the movement of forces and to target them. This and the diffusion of ballistic and cruise missile technology will allow future adversaries to hold our forward bases-ports, airfields, and large supply dumps-at risk, invalidating the way in which we traditionally project power into a threatened region. Under such circumstances, how will we move our heavy armored divisions through ports? How will we deploy our short-range tactical air forces to forward air bases? How will we even move our carriers, with their crews of 5,000, through critical choke points such as the Strait of Hormuz?

The QDR not only orients U.S. force capabilities on refighting Desert Storm, it argues that we must be prepared to do so in two regions at the same time. Failing that, it notes ominously if incongruously that “our standing as a global power . . . would be called into question.” What the NDP report calls into question is the two-war standard. To be sure, the report notes that there are threats in the Gulf and Korea that must be considered. However, neither offers a replay of the Gulf War. Iraq is far weaker than it was in 1991, Iran is not attempting to resurrect its version of Saddam Hussein’s Republican Guard tank forces for Desert Storm II, and North Korea is emphasizing the kind of new challenge that we will see more of: large numbers of missiles, combined with chemical and perhaps biological weapons, designed to deny the United States the use of forward bases. Put another way, the Pentagon is spending billions of dollars trying to maintain and modernize a force that will decline in value during the next decade. As the report says, the two-war standard “is fast becoming an inhibitor to reaching the capabilities we will need” to meet the new challenges to regional security.

It is in the debate about how the military will be transformed in the coming years that one finds the “smoking gun” showing the clear differences between the QDR and the NDP report. Much of the QDR is devoted to paying lip service to the need to exploit rapidly advancing technologies that are stimulating a “revolution in military affairs.” But when it comes time to put its money where its mouth is, DOD opts to “transform” the U.S. military by following an “in-kind” replacement modernization strategy dominated by Cold War era “legacy” weapon systems that crowd out investment in innovative equipment that could help our military meet tomorrow’s challenges.

Why, asked the NDP, are we upgrading our heavy equipment such as tanks and artillery when we may not be able to deploy them rapidly or at all, because of the growing risk to forward bases? Why are we committing some $300 billion to replace our short-range tactical air fleet when it is not clear how we can defend its forward bases from large-scale missile attack? Why are we canceling plans to build the semi-stealthy arsenal ship, which, at a cost of $500 million and with a crew of only 50, could launch 500 long-range precision-guided munitions at 500 different targets, at the same time that we are scouring the budget for an extra $5 billion to build yet another aircraft carrier, whose high signature and short-legged aircraft mean that its crew of 5,000 will be exposed to ever greater risk in future operations?

The NDP report argues that meeting tomorrow’s challenges will require forces that, among other things, place far greater emphasis on stealth, mobility, and electronic defenses than on physical protection such as armor plating. These forces will rely more heavily on operating in a dispersed fashion and on fighting at extended ranges. For this reason, future forces also will rely more heavily on longer-range aircraft and other systems. With “iron mountains” of supplies and major bases increasingly vulnerable to destruction, forces will not only have to operate in a dispersed manner, but they will require a “distributed” supply network as well.

But how does one determine what kinds of forces will best enable us to meet tomorrow’s challenges? The NDP report mandates a vigorous, long-term series of joint field exercises involving all military services to identify the new military systems that will be needed, the old military systems that are depreciating in value, and the new kinds of operations our forces must master to “solve” the emerging challenges to our security.

Here again the QDR “talks the talk” of exploiting the military revolution and transforming the U.S. military but fails to “walk the walk.” Although the Pentagon voices support for joint exercises, the QDR actually scales them back. Indeed, the General Accounting Office reports that “60 percent of the exercises involved only a single service, and should not be characterized as joint.” Revealingly, a discouraged Congress has cut $76 million from the joint exercise budget.

In providing a vision of very different challenges in our future, the very different kind of military we will need to cope with them, and the need for a true “transformation” strategy, the NDP has provided its congressional creators with what they have been seeking since the Cold War’s end: the opportunity for a real debate over defense priorities. It is an opportunity we should seize.

Future Implausible

I should really like this book. After all, it amply fulfills its subtitle, telling us how science will revolutionize the 21st century. And it does so with bravura and competence. The bravura part is Michio Kaku’s predictions of what science can do over the next century, parsed by time and field. He provides forecasts for three periods–to 2020, from 2020 to 2050, and 2050 to 2100 and beyond–for fields ranging from information technologies to medicine to planetary colonization. The competence springs from his catholic and deep technical understanding (he is a theoretical physicist and the author of two other books, Hyperspace and Beyond Einstein). The result for the reader is a tour d’horizon of the many moving edges of contemporary science and technology, from the human genome project to superstring theory to why Deep Blue beat Kasparov. After reading this book, you can chatter away about neural nets, DNA computers, quantum cryptography, hyperspace, black holes, and much more. You can also cite the bold and unambiguous judgments of an able physicist, who characterizes the standard model for subnuclear structure as “one of the ugliest theories ever proposed.” I think the book is worth reading for that alone.

So why do I say that I should like the book? Because the author is an unabashed enthusiast for science and its possibilities, and that certainly suits me. He clearly is excited by what can be done over the next century–the use of “DNA chips” to test in real time for disease that is otherwise undetectable; custom-growing human organs; creating working, and even useful, machines from individual atoms. (Although his scientist’s training compels him to throw cold water on infeasible notions such as teletransportation of the “Beam Me Up, Scotty” kind from Star Trek.) All of this is delivered in readable, if humorless, prose.

Some will be put off by the author’s masterly certitude in predicting both what and when for scientific and technological advances. Indeed, some will wish that they could be as sure of anything as he is of everything. But what sours my reading is the atavistic “back to the future” flavor of the book, taking us back to a time when the promise of science seemed both vast and unblemished-a time of “electricity too cheap to meter,” a time when canals would be carved with nuclear explosions, a time when the first glimmerings of molecular biology raised hopes that some of our worst scourges could be conquered, a time when a major historian cited the conquest of infectious diseases as perhaps the greatest triumph of the 20th century. This was before Silent Spring, before Chernobyl, before toxic waste dumps and vast wastelands created by the nuclear weapons complex, before AIDS. This was before all of us were chastened by the limits of science in dealing with national pathologies-decaying cities, poor public education, and the polarization of income distribution.

What we have learned of the misuses and limits of science and technology-note that it is misuse and limits of and not by science and technology-has forged a new view of science and technology, one concerned not only with what is and is not feasible but, more important, with the recognition that the choices we make about how to use science and technology matter a great deal. We have become much more sophisticated in understanding the beneficent powers of science and technology but also more aware that they are not an unalloyed good. Our chutzpah has been punctured. This is not to say that the achievements and potential of science are not widely respected. Witness the high status accorded scientists in opinion surveys or, more palpably, the enormous investments by the public in the support of research.

Still, we have learned that what is possible is not always desirable and what at first seems desirable can have unwanted consequences. The Internet is really nifty, but pornography or information for would-be stalkers is a click or two away. The author knows that science is not a pure good. He recognizes the downside of DNA profiling, the perils of “designer children,” the loss of privacy in a networked world. But the book is called Visions, and the author is so enthusiastic about the promise of science and technology that the voice of caution is drowned out by the shouts of wonder. Rich in possibilities, the book says little about how to make choices, especially given the fact that public resources are limited. For example, the author writes at length about interplanetary travel, including terraforming of planets such as Mars and Venus to make them more hospitable to Earth’s lifeforms, a project that the author acknowledges to be a “formidable task given their hostile atmospheres.” But the question of whether we could turn our closest planets into agricultural colonies is not simply one of technological determinism. There is first the obvious point that the world has plenty of food, even if at times and with tragic consequences it does a poor job of distributing it; second, in an increasing number of countries, the rate of population growth is going down and in some instances population is declining. Should we then ask some of our best and brightest to spend their careers thinking about terraforming the planets, or are there matters closer to home that might benefit from their attention?

Kaku’s passion for science makes him overreach. He attributes England’s loss of its American colonies to King George’s porphyria-induced madness: “It was, apparently, during one of these episodes of dementia that his prime minister, Lord North, mismanaged his American colonies, thereby triggering the American Revolution and the birth of the United States.” So much for decades of historical research and analysis. Similarly, many readers will think twice when reading that “many scientists believe that by 2020 entire classes of cancers may be curable” or that “Eventually, growing new organs may become as common as heart and kidney transplants today.” Maybe. But no citation is offered for either statement. Indeed, where there are citations, they often tend to be secondary, such as the New York Times or Time or the New York Daily News.

At times the author needs to be more helpful to naive readers in calibrating his visions for the future. Thus, he echoes without caveats a timetable for achieving fusion power, including a commercial plant by 2035. Maybe. But the time until fusion power will be commercially viable is almost a scientific constant. Since about 1950, it seems that fusion is always 5 years away from producing any net energy and 35 years away from commercial production. Kaku’s projection is consistent with this 45 years of failed predictions. Further, he writes that “In 1997, the fusion program was dealt a setback by the closing of the Princeton Tokamak due to budget cuts.” This view is certainly not universally shared by the plasma community, some of whom argue that plasma physics, and in time fusion power, will gain from the recent change in policy that emphasizes sponsoring work at many universities to better understand fundamentals in plasma science rather than investing limited funds in a big machine.

In a similar vein, the author offers a cramped view when he argues that writing down the equations for the four fundamental forces of matter on a single sheet of paper means that “amazingly enough, all physical knowledge at a fundamental level can be derived from this one sheet of paper.” Maybe. But we’ve learned in the past several years, thanks in good measure to the computer and the tools it provides to numerically analyze phenomena both complex and nonlinear and then to model them, that what really is interesting and revelatory is not the fundamental principles per se but rather how they interact dynamically; and that is true whether one is trying to explain a thought, the behavior and properties of matter, why the universe has the structure it does, or the origin and evolution of life.

The book is a good read for those seeking a knowledgeable guide to the inchoate frontiers of science and technology. Those who seek critical judgments on how science and technology can be used for the greatest good will not find them here. And those who bring a questioning attitude to predictions made for “the time frames of the future” will be skeptical, agreeing strongly with the author that “it’s always dangerous to make predictions stating that certain things are possible.”

Bolstering Military Strength By Downsizing the Pentagon

Secretary of Defense William Cohen is caught between a rock and a hard place. Critics from both the right and the left contend that the U.S. military cannot continue to execute its missions within its current budget. The Joint Chiefs of Staff say they face a shortfall of as much as $20 billion annually between the current procurement budget and the funding they need to recapitalize the armed forces. Recent Congressional Budget Office (CBO) figures contend that the budget shortfall may actually be even higher, reaching nearly $55 billion per year by 2004.

Two traditional solutions-throwing money at the problem or cutting back on international commitments-face insurmountable political obstacles. Fortunately, a third way exists. The Defense Department can save nearly $30 billion by aggressively reengineering the administrative and support side of the Pentagon, taking such actions as privatizing military housing, outsourcing information technology, converting excess military bases to private use, and improving inventory management.

Secretary Cohen’s recently announced Defense Reform Initiative (DRI) takes a few small steps in this direction. But the initiative outlines a process of evolutionary change at a time when a truly revolutionary restructuring is needed. The Defense Department, our last large industrial-age bureaucracy, must undergo a transformation similar to the remarkable shakeup that has streamlined U.S. business during the past quarter century. By increasing its reliance on the private sector and emulating best business practices, the Pentagon can achieve what Cohen calls a “revolution in business affairs” and afford the advanced systems needed to ensure continued military preeminence.

Where the money is

The downsizing of the Department of Defense (DOD) during the past decade has been well publicized. Overall, spending has shrunk nearly 40 percent since 1985. What has drawn less notice is that the department’s oversized support structure has largely resisted change. As a result, the tooth-to-tail ratio-the traditional measure of combat capability to support-has become skewed, shifting from roughly 50:50 during the Cold War to nearly 30:70 today. Spending on the support “tail” now consumes nearly $170 billion per year. Correcting this imbalance is essential if the military is to afford the modern equipment needed to carry out its missions. We should invoke the Willie Sutton principle and go where the money is: buried in the Pentagon infrastructure.

Defense “infrastructure” refers not simply to physical structures but encompasses a huge range of activities. DOD defines infrastructure as those activities that provide support services to mission programs such as combat forces and operate primarily from fixed locations. The DOD accounting system sets forth eight categories of infrastructure: installation support, central training, central medical, central logistics, force management, acquisition infrastructure, central personnel, and central command, control, and communications. Nearly half of all infrastructure spending falls in two of these categories: central logistics and installation support.

Outside of personnel costs, most spending on infrastructure occurs within the Pentagon’s operations and maintenance (O&M) accounts. O&M spending traditionally accounts for roughly one-third of the defense budget. Although it is considered the central source of funds for military readiness, O&M also supports a host of basic activities and functions that are similar to commercial practices. In fact, DOD now employs more than 640,000 people in positions with direct commercial equivalents. If treated as a stand-alone company, this workforce would rank No. 3 in total U.S. employment, behind Wal-Mart and General Motors. Overall, only 14 percent of the 2.5 million DOD personnel are officially listed in combat positions.

Military experts can debate endlessly which areas of infrastructure spending represent “tooth” or “tail.” Despite these differences of opinion, there is wide consensus that infrastructure costs are growing at an unsustainable rate. According to CBO, total O&M spending in 1996 was 7 percent higher than it was in 1981, despite large cuts in force structure. O&M costs per soldier have been growing at a rate of roughly 3 percent per year and are not expected to decline over the next few years. At the same time, the Pentagon’s middle management layers remain heavy: DOD now supports one supervisor for every nine employees. Commercial firms average one supervisor for every fourteen employees.

These growing costs are no secret to Pentagon planners. Indeed, Secretary Cohen’s Defense Reform Initiative is designed to tackle this problem. This plan calls for four major reforms: 1) downsizing and reorganizing the Office of the Secretary of Defense, 2) authorizing two more rounds of military base closures, 3) reengineering specific business practices related to contracting, and 4) aggressive use of public-private competitions to help reduce costs for DOD’s support activities. Together, these reforms are expected to generate about $6 billion per year in savings, about half of which would come from base closings.

Although the initiative contains many good ideas, a much more aggressive path is needed to reengineer the Pentagon. The use of public-private competitions is the plan’s Achilles’ heel. Under current rules, known as the A-76 process, these competitions take two to four years to complete. First, a government agency must analyze the cost to the government of performing a particular function (say, providing janitorial services for a particular base); it then opens the job to bids from the private sector as well as the military entity involved, evaluates the bids, and awards the contract. Since any function involving more than 10 employees is subject to competition, there is now a huge backlog of cost analyses awaiting OMB’s attention.

Because public entities do not fully account for their overhead as private competitors do, the playing field between the private and public sectors is rarely level. Moreover, the pressure to reduce costs often ends once an A-76 competition is completed. Thus competitions yield only one-time savings. A Center for Naval Analyses study of more than 2,000 public-private competitions found that although these competitions have historically yielded savings averaging 31 percent, they impose a significant administrative burden, generating additional costs of about 11 percent. Even worse, the A-76 process results in piecemeal reform rather than sweeping reorganization. Competitions have traditionally affected small operations with an average size of 35 employees. A private firm may win a contract to perform janitorial services at one installation, while public employees continue to do this job elsewhere. This prevents firms from capturing economies of scale and perpetuates the Pentagon’s cumbersome and internally inconsistent organizational structure.
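A minimal arithmetic sketch shows how those two percentages interact, assuming (an assumption on my part; the underlying studies may define the figures differently) that both the 31 percent savings and the 11 percent administrative cost are measured against the same pre-competition baseline:

```python
# Illustrative A-76 arithmetic for a hypothetical $100M-per-year support function.
baseline_cost = 100.0       # $ millions per year before the competition (hypothetical)
gross_savings_rate = 0.31   # average savings reported in the CNA study
admin_cost_rate = 0.11      # added administrative burden of running the competition

gross_savings = baseline_cost * gross_savings_rate    # 31.0
admin_cost = baseline_cost * admin_cost_rate           # 11.0
net_savings = gross_savings - admin_cost               # 20.0

print(f"gross savings ${gross_savings:.0f}M, admin cost ${admin_cost:.0f}M, "
      f"net savings ${net_savings:.0f}M ({net_savings / baseline_cost:.0%} of baseline)")
```

Even on those terms, a one-time net saving of roughly 20 percent is modest compensation for a process that takes two to four years to complete.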

Piecemeal reform must be replaced by sweeping and rapid change. In some areas, such as aircraft repair and maintenance, Congress requires public-private competitions. But in many other areas-payroll processing, central logistics, and surplus property disposal-DOD can simply opt to exit, avoiding the protracted process of public-private competitions altogether. We cannot wait four years to see whether outsourcing might work in a few target areas. We know that it works, and we can act on this knowledge today.

Learning from business

Through the use of reengineering, outsourcing, and other management tools, the U.S. private sector has become the envy of the world. This situation rebuts the conventional wisdom of the 1980s, when the rise of Japan, Inc., and a united Europe appeared to sound the death knell for U.S. industrial preeminence. In response to these competitive pressures U.S. businesses undertook a host of major reforms: Shareholders assumed a greater voice in operations, managers deployed new financial tools, and most important, companies began to shed noncore businesses to focus on their “core competencies.”

Until the 1980s, the corporate structure of choice was the pyramid, in which a host of different organizations reported up the chain of command to a single centralized leadership. Conglomerates such as Textron and Westinghouse managed businesses in many unrelated industries and maintained vertically integrated operations within each sector. Some of these firms were well managed, but in most cases, the conglomerate structure led to increased overhead costs, reduced flexibility, and misallocation of capital.

The business challenge from Japan and elsewhere shook up this sleepy world. U.S. business was forced to change. The thrust of this transformation was simple: If someone else can do a job better and cheaper, let them. Following this precept, the old conglomerates broke up. Outsourcing became the predominant management trend of the 1990s. In the area of information technology alone, corporate America outsources more than $30 billion of work each year. A recent Chief Executive magazine/Andersen Consulting Survey found that more than 90 percent of corporate CEOs expected to be involved in strategic outsourcing relationships by the year 2000.

The structure of choice in today’s business world is a web or network, in which companies rely on partnerships, cross-investments, and strategic alliances to enter new markets. Leading firms such as Microsoft and Cisco Systems no longer create new divisions in distant fields. Instead, they form strategic partnerships or other relationships, such as Microsoft’s investment in Comcast as part of its strategy for gaining access to the cable market.

Today, the Pentagon is one of the last of the industrial pyramids. Its organizational structure is based on ways of doing business that date back to the early years of the Cold War. At that time, DOD had no choice but to develop in-house expertise or services since private sector suppliers, such as today’s Federal Express or EDS, did not exist. Contracting out these activities can improve services and save tens of millions each year, with zero impact on DOD’s core mission of military readiness.

Take the relatively simple case of payroll processing. Today, the Defense Finance and Accounting Service charges $4.58 simply to process a civilian paycheck; the private sector charges less than $2 for the same service. Or consider the Defense Reutilization and Marketing Service (DRMS), DOD’s excess property disposal agency. Operating 164 sites around the world, DRMS disposes of $24 billion worth of excess military property each year, selling it or donating it to charitable organizations. It has lost money 24 out of the last 25 years, with sales yielding 2 percent of the goods’ original price. By comparison, the General Services Administration generates 6 percent while private sector entities such as airlines often obtain 50 percent of list price for parts. Other federal agencies such as the Customs Service privatize property disposal and actually return money to the Treasury.

DOD has long recognized that this record is abysmal. In fact, the Pentagon’s inspector general has recommended that the department suspend funding for DRMS. In 1993, Vice President Gore’s first Reinventing Government report called for complete outsourcing of DRMS. More than four years and millions of dollars later, no outsourcing has occurred, and DRMS’ official position is that it is “considering privatization alternatives.”

DOD can resolve its budget dilemmas by reaching out to the private sector and emulating the best business practices. This requires a commitment to aggressive outsourcing and privatization. In business terms, the Pentagon must focus on its core business-defense-and outsource activities that are tangential to it. The goal of this effort is to cut the costs of support activity, not eliminate it. The list of candidates for change is long. Nearly every business function should be considered. A rule of thumb is to check the local Yellow Pages: If a DOD function can be found there, it should be added to the list.

Health care, payroll processing, information technology services, housing construction, travel services, and utilities all jump out as ideal candidates for outsourcing. These are areas that offer the best prospects for savings and/or improved operations as well as for quick success. In addition they are far removed from combat operations and have relatively little impact on the quality of life of military personnel. These business targets have the added benefit of falling in sectors where competitive private sector firms thrive. Companies such as Federal Express, Computer Sciences Corporation, EDS, American Express, and Wells Fargo are certainly capable of serving the military as efficiently as they serve thousands of civilian customers.

Take health care, for example. Although DOD’s health care cost woes mirror those in U.S. society as a whole, DOD’s problems are compounded by factors unique to the military: lower occupancy rates in military health care facilities and a failure to cut spending in accordance with force structure reductions. In addition, when compared with civilian managed care organizations, DOD’s medical programs do little to discourage high use of services. Major savings could be generated if military health care utilization rates could be brought in line with comparable civilian rates. At the same time, many military retirees continue to use the military health care system, even though nearly 50 percent of retiree families, before they become eligible for Medicare, are covered by private insurance from employers.

Overall savings from privatization and outsourcing will be significant. According to the Defense Science Board, an aggressive campaign of business-based reforms could save nearly $30 billion per year. The biggest savings would come in logistics ($9.3 billion), base closings ($6 billion), and medical care ($4 billion). To bolster this estimate, studies of state and local governments and foreign governments (particularly Britain, where defense privatization is much more advanced) indicate that savings ranging from 15 to 50 percent are the norm when government work is outsourced. Achieving significant savings, however, requires success in the whole range of DOD’s commercial functions.
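As a rough consistency check (my own back-of-the-envelope comparison, not one made in the Defense Science Board study), applying the 15 to 50 percent range to the roughly $170 billion spent annually on the support “tail” comfortably brackets the $30 billion estimate:

```python
# Back-of-the-envelope check on the scale of potential savings (illustrative only).
support_spending = 170.0          # $ billions per year on the support "tail"
savings_range = (0.15, 0.50)      # savings norm reported for outsourced government work

low, high = (support_spending * r for r in savings_range)   # ~25.5 and ~85.0
dsb_estimate = 30.0               # $ billions per year, Defense Science Board
largest_items = {"logistics": 9.3, "base closings": 6.0, "medical care": 4.0}

print(f"15-50% of ${support_spending:.0f}B in support spending: ${low:.1f}B to ${high:.1f}B")
print(f"DSB estimate: ${dsb_estimate:.0f}B; three largest identified items total "
      f"${sum(largest_items.values()):.1f}B")
```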

But cost savings should not be the only goal of outsourcing. Indeed, savings should always be secondary to the larger goal of providing better service. For instance, thanks to aggressive reengineering and outsourcing of Pentagon travel services, DOD employees will go from spending an average of five to seven hours on paperwork for each trip to mere minutes of administrative time under a new streamlined process. In the area of logistics, DOD takes an average of 26 days to deliver in-stock items. Leading commercial firms take 2 to 3 days. Reducing these administrative costs and hassles produces real benefits that do not always appear in an account statement.

If not now, when?

While chairing the Packard Commission on defense management reform in 1986, David Packard made an insightful point: “We all know what changes need to occur. The real question is why don’t we do it?” Packard’s query remains as relevant today as it was in the days of $600 toilet seats.

The causes of inaction come down to two factors: jobs and inertia. Cutting infrastructure costs means cutting jobs in someone’s congressional district. Thus, obtaining support on Capitol Hill for needed reforms has been a difficult task. In the mid-1980s, for instance, then-Defense Secretary Frank Carlucci argued that DOD did not need to rely on in-house security guards, since many private companies offered similar services at competitive rates. Indeed, most government agencies already contracted out this function. When Carlucci attempted to outsource this work, he was blocked by legislation that permanently barred DOD from contracting out guard and fire services. Nearly a decade later, this law remains on the books and the work remains in-house, supporting more than 22,000 government employees.

The effort to close additional military bases has followed the same path. The Senate soundly defeated this year’s request to authorize new rounds of the Defense Base Closure and Realignment Commission (BRAC), a process expected to save a minimum of $1.7 billion per year. Opponents of base closures cited a host of reasons for their votes, but their opposition was quite simply based on parochial politics. Nearly every opponent faced the prospect of a major base closure in his or her district. In fact, majority leader Trent Lott personally circulated a list of at-risk bases on the Senate floor, convincing many senators to oppose the request.

Political pressures are not the only cause of inaction. Many of today’s problems can simply be attributed to inertia. Changing large organizations is always difficult, and it is especially hard at the world’s largest organization, the Pentagon, where special civil service rules, arcane budget and accounting practices, and unique cultural issues converge to impede change. For example, the use of existing federal budget-scoring rules, established to assess the impact of budget measures on the deficit, has had a perverse and unintended effect on privatization efforts. Current procedures do not count future savings. As a result, privatization often appears as a spending increase because of one-time costs related to the sale or outsourcing of an asset. These rules force privatization advocates to jump two hurdles: They must convince both budget hawks and those committed to business as usual that privatization makes sense.

This inertia is reinforced by the fact that DOD’s top leaders-both military and civilian-are trained to deal with “teeth,” not “tail.” When faced with budget cutbacks, they know how to trim fighter wings or reorganize divisions but are poorly equipped to reengineer business functions such as payroll services, health care, and accounting. Moreover, since short tenures are common among top Pentagon officials, opponents within the bureaucracy often block reform through simple delay tactics. DRMS’s ability to block outsourcing for more than four years, despite the support of the vice president, top DOD leadership, and the business community, offers a compelling case in point.

Nonetheless, today the prospects for change appear more favorable than ever. Budget pressures are the primary driving factor. For nearly a decade, the U.S. military has operated under a “procurement holiday” where new weapons buys were delayed and the military survived on the backlog from heavy procurement spending during the Reagan buildup. This strategy made sense in the aftermath of the Cold War, but investments to recapitalize the force cannot be postponed for much longer. Since weapons systems now take anywhere from 16 to 20 years to complete development, today’s procurement decisions will shape the military of 2015. If present trends continue, the military of 2015 will be operating with very antiquated equipment. Under current budget plans, the average age of bombers will reach 35 years by 2010, and heavy attack helicopters will approach nearly 25 years in age. We are reaching the point where our troops will be younger than the systems they operate. These aging systems will require replacements or, at a minimum, significant new investments in repair and maintenance. At the same time, big-ticket procurement items such as F-22 and F/A-18 aircraft will consume larger portions of the total procurement budget.

The financial pressure to support current military operations, replace obsolescent systems, and buy new systems, combined with the positive lessons drawn from U.S. industry, creates a powerful force for change. We must convince Congress that inaction threatens military readiness and convey that sense of urgency to Pentagon middle managers. In addition, we must persuade businesses to support widespread reform-not just because it benefits their bottom lines, but also because it benefits our national security.

Congress must be convinced that inaction on outsourcing and privatization threatens military readiness.

Addressing congressional concerns. Outsourcing contracts can and should include protections for employees. But the laudable goal of protecting federal employees should not prevent us from pursuing the more important objective of creating an affordable and effective Pentagon. Congress recognizes that behind every outsourcing proposal is someone’s job. Reduced personnel costs create a good portion of the savings from outsourcing, so we must expect that aggressive outsourcing will lead to job loss. In the private sector, job losses on the order of 10 to 15 percent generally accompany major reorganizations, due to streamlining, economies of scale, and management consolidation.

The Pentagon has limited experience with outsourcing entire operations. In its few such experiences to date, DOD has required that contractors provide special first-hire privileges to former government personnel. For example, at the newly privatized Naval Air Warfare Center in Indianapolis, most of the former government employees have been hired by Hughes, the new operator of the facility. Indeed, Hughes has even retained the government employees union that formerly represented these workers. Similarly, when military bases closed, the federal government provided significant levels of funding for base reuse, helping to trigger many local economic development success stories. Federal support for reuse helped cushion the economic impact of base closures and assisted the community in attracting new economic activity to the base. A similar transition effort will be needed to offset the effects of outsourcing so that DOD can overcome congressional and local political opposition and lay the foundation for long-term change.

Promoting management buy-in. Creating a constituency for reform requires rewards for those who reform. Past reform initiatives have foundered because line officials at DOD had little incentive to change when savings from reform were simply returned to the Treasury and future budgets reduced to reflect past savings. We should allow units to keep some of the savings from outsourcing and reengineering. For example, if a base commander saves funds by using contractors to maintain base facilities, a portion of those savings should be reserved to meet other needs at the base. More broadly, DOD needs to develop a budgetary “lockbox” that retains savings for pressing internal needs and avoids the current practice of using savings to pay for unexpected contingency operations.

Involving business. The private sector must also be convinced that outsourcing and privatization make sense. If DOD begins to do business in new ways, what will that mean for business? For one thing, it probably means creating new, long-term relationships between the public and private sectors. Relationships between buyer and seller in the private sector are based on long-term partnerships. This concept is anathema in Washington’s world of one-year budget cycles. If privatization is to succeed, government must also enter into long-term relationships with preferred suppliers. Long-term contracts allow suppliers to better amortize the costs of initial investments; if they are based on performance, rather than price, they will ultimately reduce costs for the taxpayer.

Utilities offer a useful example. Many military bases suffer from decaying and inefficient utility services, which private firms are willing to supply. Since upgrading utilities requires a significant up-front investment, private investors are reluctant to enter into short-term contracts. But if the military offered a 10- to 15-year contract for utility services, private firms would jump at this business and even pay to upgrade facilities. DOD would benefit not only from cheaper heat and electricity but also from avoiding significant new investment (now estimated to reach $20 billion) in utility upgrades.

Contracting personnel also need training to deal with this brave new world. At present, DOD is well organized to buy things, but it does not buy services very well. Yet service contractors are becoming more important to DOD. For example, Computer Sciences Corporation is now the thirteenth largest U.S. defense contractor, and many traditional defense contractors, such as Lockheed Martin, are aggressive competitors in the services business. Making effective decisions in purchasing services requires improved training in performance measures, contract design, and the like. Today, it is common to find DOD outsourcing contracts that attract no bidders because the contracts are structured so that firms must assume all the risks with no assurance of earning a profit. Fairer, more flexible contracts will be needed to make outsourcing work.

Although the benefits of applying best business practices to DOD are clear, it is equally clear that the Pentagon is not a business. Unlike Fortune 500 companies, the Pentagon cannot simply decree a major downsizing; political support must be sought and won. Moreover, because taxpayer dollars remain at issue, a degree of public oversight must remain in place. Nonetheless, DOD can adapt the lessons taught by corporate America to achieve a real “revolution in business affairs.”

Putting a Price Tag on Nature

The contributors to Nature’s Services, who include many of the nation’s leading natural scientists, have taken on the enormous tasks of, first, characterizing the ways in which Earth’s natural ecosystems confer benefits on humanity and, second, making a preliminary assessment of their value. They do a fine job of accomplishing the first but make a mishmash of the second.

Most of the individual contributions to Nature’s Services catalog and describe the services provided by some part of the natural world, how mankind relies on that subsystem or service, and what the human population is doing to degrade or threaten the very service on which it relies. The picture that emerges is one of a marvelously complex, interwoven, and perhaps fragile biogeochemical dance that supports life.

Some of the chapters focus on the overarching services provided by natural systems. For example, in their review of the ecosystem services supplied by soil, Gretchen Daily, Pamela Matson, and Peter Vitousek start by describing the complex process of soil formation and the importance of soil in retaining nutrients and providing physical support for plants. Gary Paul Nabhan and Steven Buchmann draw a complex portrait of the pollination process and make clear the potential consequences of the decline of key pollinators such as wild and captive honeybees. Other chapters examine the range of services provided by major biomes, including marine and freshwater ecosystems and forests. And some chapters give case studies. For example, Andrew Wilcox and John Harte focus on a specific place-Gunnison County, Colorado-and the ecosystem services on which the county relies.

A variety of preliminary assessments of the values of ecosystem services are made. For example, Daily, Matson, and Vitousek estimate the value of key soil functions to crops at $850,000 per hectare, based on the cost of modern hydroponic systems in the United States. They also estimate the total value of natural nitrogen fertilization on all land at $320 billion per year, based on the cost of artificially supplying nitrogen fertilizer to all land plants, minus the amount already supplied anthropogenically.

Flawed analysis

Unfortunately, the book encounters serious difficulties as it moves from description to analysis. Although the chapter by Lawrence Goulder and Donald Kennedy does a fine job of laying out the economic principles that could establish a meaningful basis for the valuation of natural services, most of the other authors write as if they hadn’t read it.

One of the most confusing aspects of the book is the absence of a baseline for the analysis. Daily’s introductory chapter challenges the reader to imagine a colonization of the Moon and all of the natural systems that would have to be transplanted or otherwise imitated. This suggests that the baseline against which comparisons will be made for the “total value” calculations is total loss of ecosystem services. Indeed, the chapter on soil, Norman Myers’ chapter on forests, and the chapter by Sandra Postel and Stephen Carpenter on freshwater ecosystems seem to adopt this approach. Although the premise at least is clear, the result is not particularly interesting: After all, the complete loss of any one of those services/ecosystems could lead to the demise of humanity, with an implicit infinite cost.

In other chapters, the authors apparently prefer a big number to the biggest number, so they each offer estimates that are significant percentages of gross world product but far short of infinite. In effect, they adopt some other baseline against which to compare the value of present services but fail to identify that baseline. In fairness, not all of the authors fall prey to this floating-baseline trap. Nabhan and Buchmann clearly state that their assessment of the value of pollination services by animals is derived “by comparing the yield (loss) of the crop in the absence of these animals with the yield in the presence of the pollinators.”

A second key problem is that the book largely fails to focus on the crucial issue of marginal rather than total costs, despite Daily’s admonition in her introductory chapter. “As a whole ecosystem services have infinite use value because human life could not be sustained without them,” she writes. “The evaluation of the tradeoffs currently facing society, however, requires estimating the marginal value of ecosystem services (the value yielded by an additional unit of service, all else held constant) to determine the costs of losing-or benefits of preserving-a given amount or quality of service.”

Why, then, is this message so often ignored throughout the balance of the book? One can imagine at least two reasons. First, the siren call of large numbers is too powerful to ignore, especially for those who may have a political agenda. The estimate by Osvaldo Sala and José Paruelo that conversion of lightly grazed pastureland into cropland causes the release of carbon dioxide potentially valued at $200 per hectare simply does not have the pyrotechnic power of Rosamond Naylor and Paul Ehrlich’s claim that in the absence of natural pest control services the entire market value of crops, $1.4 trillion, would be lost.

Marginal value also gets short shrift because the authors simply do not seem to understand the concept. For example, after providing estimates of the total potential value of natural pest control, Naylor and Ehrlich state that “[c]alculating the marginal cost is virtually impossible, however, due to the difficulty in identifying a baseline and measuring a unit change in the natural pest control service.” But this reasonable statement raises the question of how one can measure total value without first identifying a baseline and measuring marginal value. The authors then point out that in the case of the brown planthopper in Indonesia, reducing pesticide use and reestablishing natural pest control led to more than $1 billion in benefits. “Based on the magnitude of this result,” they conclude, “one can only project that replacing pesticides with natural pest controls on a global scale would lead to marginal benefits in the tens of billions of dollars annually.” This statement is nonsense; the units simply do not make sense. Marginal benefits are expressed in “dollars per unit of change.” A crude measure of marginal benefits associated with pest management might be dollars saved per one-ton decrease in pesticide use. Clearly, the attempt to scale up from a specific case to estimate global marginal benefits is meaningless.
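
To make the units concrete (the formulation below is ours, not the book’s), a marginal benefit is a ratio:

\[ MB = \frac{\Delta(\text{value of crops protected})}{\Delta(\text{amount of natural pest control service})}, \qquad \text{e.g., dollars saved per ton of pesticide displaced}. \]

A projection of “tens of billions of dollars annually,” by contrast, has the units of a total (dollars per year), which is precisely why scaling up the Indonesian case says nothing about marginal benefits.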

Mistaking costs for values

In many cases, the authors also mistake the cost of replacing a service or avoiding its loss for the value of the service itself. For certain applications this is appropriate, particularly when the services are essential. But if certain natural services were lost, they might not be fully replaced. For example, the fact that it might cost $250 billion annually to offset net biosphere carbon emissions does not mean that it would be worth doing, as assumed by Susan Alexander, Stephen Schneider, and Kalen Lagerquist. And the fact that it might cost $320 billion per year to artificially fertilize all land plants if all nitrogen cycling services were suddenly lost does not imply that we would choose to do so, as implied by Daily, Matson, and Vitousek.

A similar mistake made throughout the book is the confusion of the economic impact of an activity with the social value of the opportunity for that activity. For example, Postel and Carpenter state that the “economic output” of freshwater fishing in 1991 in the United States was approximately $46 billion, an estimate based on total spending on equipment, travel, and intermediate services. But $46 billion is the cost of taking advantage of the fishing opportunity, not the value of the opportunity itself. Conceptually, the latter figure would be the total value individuals derive from fishing less the cost of availing themselves of the opportunity. Hence, for a given gross value, the higher the expenditure on the sport, the lower the actual net value derived. The $46 billion figure is actually a transfer payment that compensates service providers for the cost of providing the service. In a perfectly competitive economy, it can be used as an estimate of the cost of supplying the service, but it is not an indicator of the value of the ecosystem service. Similar mistakes are made by Postel and Carpenter with respect to freshwater transportation services and by Charles Peterson and Jane Lubchenco on employment loss related to overfishing of marine ecosystems.
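
In the economist’s terms, the distinction being drawn here is between expenditure and net (consumer surplus) value; a minimal statement of it is

\[ \text{net value of the opportunity} = \text{total willingness to pay} - \text{expenditure}, \]

where the expenditure is the roughly $46 billion spent on equipment, travel, and services. Only the net value measures what the ecosystem service itself is worth to anglers.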

Throughout the book, I had the uneasy feeling that some serious double counting was going on. For example, the chapters on pollination, pest control, soil, and water services all seemed to claim that the loss of that service would lead to a complete loss of agricultural production, which is undoubtedly true. However, agricultural production can be lost only once, so to the extent that the chapters rely on total value calculations to make their impact, there is the danger that, taken together, they overreach.

The difficulty here is that there is no integrating framework that allows for what economists call “general equilibrium” effects. This is the idea that a change in one part of the economy (ecosystem) can have direct effects that are easily observable as well as equally important indirect effects in other sectors of the economy (ecosystem). The other side of the general equilibrium coin is that when multiple changes occur simultaneously, a given benefit can be lost only once. Daily recognizes this problem in her concluding chapter, where, to her credit, she resists what must have been a powerful temptation to simply sum up the values estimated in the various chapters to arrive at an estimate of the total value of all ecosystem services.

The potential impact of the book was also dulled by a number of minor but easily avoided problems. For example, in estimating the value of carbon accumulation in grassland soils over 50 years, Sala and Paruelo fail to discount future benefits. In discussing the costs of water rights, Postel and Carpenter do not clearly distinguish annual from one-time benefits of water flows. Daily, Matson, and Vitousek badly misuse the term “existence value,” first introduced in the book by Goulder and Kennedy.
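
For readers unfamiliar with discounting, the omission matters because benefits that arrive far in the future are worth much less today. As a rough illustration (the 5 percent discount rate is ours, chosen only for the example), the present value of a 50-year stream of benefits B_t is

\[ PV = \sum_{t=1}^{50} \frac{B_t}{(1+r)^t}, \qquad \text{so that } \frac{\$1 \text{ received in year 50}}{(1.05)^{50}} \approx \$0.09 \text{ today}. \]

Ignoring discounting can therefore substantially overstate the value of slowly accumulating benefits such as soil carbon.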

None of the mistakes in Nature’s Services are so egregious as to undo the significant good that the book accomplishes. However, they do suggest that a social scientist, particularly an economist, was not integrally involved in coordinating and editing the volume. This is a problem for a project that claims to be based on the work of a “broad, interdisciplinary group of natural and social scientists.”

Refocusing U.S. Math and Science Education

The Third International Mathematics and Science Study (TIMSS) is the most ambitious cross-national educational research study ever conducted, comparing over half a million students’ scores in mathematics and science across five continents and 41 countries. TIMSS is far more than the “academic Olympics” that so many international comparative assessments have been in the past. It included a multiyear research and development project that built on previous experience to develop measures of the processes of education. Classroom observations, teacher interviews, and many other qualitative and quantitative information-gathering strategies played a part in this development effort. The result was a set of innovative surveys and analyses that attempted to account for the varying roles of different components of educational systems and to measure how children are given opportunities to learn mathematics and science.

The situation regarding what children learn in the United States is disheartening. We are not at all positioned to reach the high expectations set for our nation by the president and our state governors. We are not likely to be “first in the world” by the end of this century in either science or mathematics.

In the fourth grade, our schoolchildren performed quite well on the paper-and-pencil test in science; they were outperformed by only one country and were above the international average in mathematics. Yet the eighth-grade U.S. students fell substantially behind their international peers. These students performed below the international average in mathematics and just above the average in the written science achievement tests.

The better performance of U.S. fourth-graders than eighth-graders is not cause for celebration. It suggests that our children do not start out behind those of other nations in mathematics and science achievement, but somewhere in the middle grades they fall behind. These results point out that U.S. education in the middle grades is particularly troubled-the promise of our fourth-grade children (particularly in science) is dashed against the undemanding curriculum of the nation’s middle schools.

TIMSS points to aspects of our school systems that bear close reexamination. In the past, many critics have attempted to place the blame for U.S. schoolchildren’s poor performance on cross-national achievement tests on a variety of factors external to schooling. However, early analyses of TIMSS data suggest that schooling itself is largely responsible.

What you teach is what you get

How has this come to pass? What features of the processes of schooling appear to be related to the overall mediocre performance of U.S. schoolchildren, and how are these processes related to the deterioration of achievement levels in the years between grades 4 and 8?

Findings from this study are still being released, and TIMSS researchers the world over continue to work on reporting and analysis. Thus, much of what has been published so far merely scratches the surface of the vast interrelated information sources available in TIMSS. Yet preliminary results have been remarkably consistent in the message they send about the role of U.S. curriculum and instruction in fostering mediocre achievement.

“Curriculum” is a word with many commonly accepted meanings. In this article, we understand curriculum to be made up of at least three interrelated levels. The “intended curriculum” is what our schools, school districts, states, and national organizations have set as goals for instruction in each of our school systems. This aspect of the curriculum is examined in TIMSS through its study of textbooks, curriculum guides and programs of study, and surveys of educational authorities. The “implemented curriculum” is the pursuit of goals in the classroom-the array of activities through which students and teachers engage in the process of learning. In TIMSS, this aspect of the curriculum is studied through videotapes and surveys of teachers’ instructional practices, beliefs about education and the subjects they teach, and other features of the opportunities they give students to learn mathematics and science. Finally, the “attained curriculum” is the knowledge, skills, and attitudes that individual students acquire and are able to use. This final aspect of the curriculum is measured in TIMSS through paper-and-pencil and practical achievement tests as well as surveys.

What do all our measures of the curriculum tell us about U.S. schooling as compared with schooling in other countries, especially those whose students significantly outperformed our schoolchildren on the TIMSS achievement tests? The findings point to elements common to most high-achieving countries that are not shared by the United States. These findings make up what appears to be a set of conditions for the realization of higher standards of mathematics and science achievement for larger numbers of schoolchildren. The fact that these conditions are shared by most high-achieving TIMSS countries suggests that they are necessary conditions. The fact that they are sometimes shared by countries that did not outperform the United States warns us that they are not sufficient in themselves to guarantee higher achievement. These findings suggest a number of important lessons that challenge common practice in the United States. However, we cannot merely emulate the practices of other countries. We must reconsider our own practices in the light of this new knowledge and then apply it to generate new alternatives for our own context.

An unfocused curriculum

One striking feature of U.S. textbooks and curriculum guides as compared with those of other countries is the magnitude of the differences. Our textbooks are much larger and heavier than those of all other TIMSS countries. Fourth-grade schoolchildren in the United States use mathematics and science textbooks that contain an average of 530 and 397 pages, respectively. Compare this with the international averages for textbooks intended for children of this age: 170 pages in mathematics and 125 pages in science.

Also striking is how our textbooks differ from most others in the number of topics they cover. Figure 1 shows that U.S. textbooks cover far more topics in grades 4 and 8 than do those of 75 percent of the nations participating in TIMSS. The number of topics is much smaller in Japanese and German textbooks, for example. Japanese schoolchildren significantly outperformed U.S. schoolchildren in TIMSS; German schoolchildren did not.

Does it matter that our textbooks are so comprehensive? Preliminary analyses suggest that it does, because breadth of topics comes at the expense of depth of coverage. Consequently, our textbooks give most subjects only perfunctory treatment, and the amount of instructional time that teachers can devote to each element in this broad list of topics and skills is severely constrained.

This issue of teachers’ use of textbooks is, of course, vital. Information collected from the national random sample of teachers in TIMSS indicates that the majority appear to be attempting the Herculean task of covering all the material in the textbook. This dubious goal can rarely be accomplished, but the result is that U.S. teachers cover more topics per grade than is common in most TIMSS countries. The implications for how thoroughly topics can be explored, closely examined, and hence learned are clear. A curriculum that emphasizes the coverage of long lists of topics instead of the teaching and learning of a more focused set of basic contents, to be explored in depth and mastered, is a curriculum apt to squander the resources that teachers and children bring to bear on teaching and learning. The unfocused curriculum is not a curriculum of high achievement.

The unfocused curriculum of the United States is also a curriculum of very little coherence. Attempting to cover a large number of topics results in textbooks and teaching that are episodic. U.S. textbooks and teachers present items one after another from a laundry list of topics prescribed by state and local district guides, in a frenzied attempt to cover them all before the school year runs out. This is done with little or no regard for establishing the relationships among the topics or themes on the list. The loss of these relationships encourages children to regard these disciplines as collections of disjointed notions rather than as parts of a disciplinary whole.

The challenge is to create sound renovated educational systems that flood the light of reform into every corner of our nation.

The TIMSS videotape study of grade 8 mathematics lessons in the United States, Japan, and Germany further illustrates the episodic nature of the implemented curriculum in this country. Mathematicians from U.S. universities were asked to examine transcripts of mathematics lessons from Germany, Japan, and the United States (all indications of the country in which a lesson took place were removed from the transcripts). These mathematicians rated each lesson according to the overall quality of the mathematical content presented in it. Coherence of the content-that is, the establishment of clear, disciplinarily valid linkages among the topics and skills in the lesson-was an important part of the rating. Figure 2 shows the ratings and the substantial differences among the three countries studied. It is apparent that U.S. instructional practices mirror the incoherent presentation of mathematics that characterizes our intended curriculum.

A static conception of basics

Public discussion about education in our country rages over what are known as the “basics.” How we define the fundamental content and skills that children need to acquire to be regarded as educated matters more and more as the United States struggles to formulate educational policies intended to be in place as we enter the new millennium. Participants from all points of the political spectrum and educators representing a broad range of divergent educational approaches and philosophies are engaged in this debate. Information from TIMSS has clear implications for these discussions.

In the United States, it appears that a common implicit definition of basics in education is content and skills that “are so important that they bear repeating-and repeating and repeating.” Arithmetic, for example, is a set of contents and skills that are revisited in U.S. classrooms year after year. Even in grade 8, when most high-achieving TIMSS countries concentrate their curriculum on algebra and geometry, arithmetic is a major part of schooling in this country.

Other nations act as if far more mathematics and science topics are basic. In these countries, basics are so important that when they are introduced the curriculum focuses on them. They are given concentrated attention so that they can be mastered, and children can be prepared to learn a new set of different basics in following grades. Such focused curricula are the motor of a dynamic definition of basics. Among the highest-achieving countries, each new grade sees new basics receiving concentrated attention to prepare students for the mastery of more complicated topics that are yet to come.

TIMSS’s studies of curricula, textbooks, and teachers’ instructional practices show that the common view of educational basics is different in the United States. At grade 4, the definition of basic content in the United States does not differ substantially from that in high-achieving countries. However, in our country, the same elementary topics that form the core content in grade 4 appear repeatedly in higher grades. What new content does enter the curriculum rarely does so with the in-depth examination and large amount of instructional time that characterize other countries. In fact, on average we introduce only one topic with this type of focused instructional attention between fourth and eighth grade in either mathematics or science. Most TIMSS countries introduce 15 topics with intense curricular focus during this period. The highest-achieving TIMSS countries introduce an average of 20 topics in this way.

In the U.S. curriculum guides and textbooks, about 25 percent of the topics covered in the eighth grade are new since the fourth grade. For most TIMSS countries, about 75 percent of the topics are new. This persistence of old topics and lack of instructional focus on topics that are newly introduced at each grade may help explain the drop in U.S. student achievement levels between grades 4 and 8. The persistence of elementary content in middle school suggests that the lauded “spiral curriculum” in the United States is in fact a vicious circle.

We should not simply move upper grade courses to lower grades; the entire process of defining content grade by grade must be involved.

As suggested above, the consequence of lack of focus and coherence and the static approach to defining what is basic is that U.S. curricula are undemanding when compared to those of other countries, especially during the middle grades. Materials intended for our mathematics and science students mention a staggering array of topics, most of which are introduced in the elementary grades. This mention does not include much more than the learning of algorithms and simple facts. Demanding standards would require more sophisticated content, taught in depth as students progress through the grades.

Recently, TIMSS’s discovery that grade 8 curricula in most high-achieving nations largely concentrate on algebra, geometry, and advanced number topics in mathematics and on physics and chemistry in science has led to some proposals that grade 9 algebra courses be given in grade 7 or 8. This is a recent example of a common pitfall of interpretation of findings from comparative studies such as TIMSS-the rush to emulate “successful” countries. However, this approach ignores the findings regarding other aspects of curriculum.

The point is not merely that these contents are taught in the eighth grade. It is also that the curriculum in these countries carefully builds up to the study of these topics. This is accomplished through a process of focused and coherent transitions from simple to increasingly more complex content and skills. Thus, we should not simply move upper-grade courses to lower grades; the entire process of defining content grade by grade must be involved. In addition, the inclusion of more complex content in the middle grades is not the only factor to be considered. High academic standards require students to reason, analyze, and develop the ability to solve problems and understand the processes of science and mathematics. Thus, more ambitious performance expectations for students are necessary as well.

Dispersed control

Many of the lessons above invite important additional questions: How do high standards become embodied in educational policy? What type of authority is attached to curriculum guides, programs of study, textbooks, and tests? The study of TIMSS nations and their contrast with U.S. educational policy again suggests important challenges confronting our educational system.

There are many bodies guiding education in the United States. There are close to 16,000 local school districts in public education alone, a variety of intermediate districts, and many other private and public bodies concerned with education. Respect for local control has resulted in state and national standards (mostly proposed by national professional or scientific organizations such as the National Council of Teachers of Mathematics or the National Research Council) that can provide little guidance for implementation, because these standards compete with many others for the attention of school administrators and teachers. Add to this mix a wide array of commercially produced textbooks and standardized tests, each embodying yet another definition of what is basic, and the situation can be depicted as a veritable Tower of Babel.

Standards that transcend local boundaries are common in most TIMSS countries and are present in all countries outperforming the United States. Yet not all countries have national standards in the sense of one set of standards mandated for all students by a central government authority. In Belgium, separate standards apply to the Flemish- and French-speaking school systems. In Switzerland each canton, and in Germany each of the Länder, defines standards for its school systems. Despite this, most countries have reached consensus on the question of basics grade by grade. The result is that the disparate voices of various bodies harmonize in a consensual view of the basics, producing a coherent vision to guide their systems.

We must seek policies that foster innovation (and facilitate diffusion of successful innovations) while ensuring high standards for all.

Many TIMSS nations are as concerned with educational equity as the United States is, viewing the education of the elite as no more important than the education of children from households of low social and economic status. These countries mostly have policies that attempt to ensure equity by ensuring a common educational standard, instead of policies that leave standards entirely up to localities. “High standards for all,” instead of high standards for some and lower standards for others, is the policy these countries follow. They favor a consensus on what it means to succeed in school. This stands in marked contrast to the U.S. approach of essentially allowing each locality to define its own standard of success, as if the economic system did not ultimately hold all children to a common standard.

In the United States, state governors and the federal legislative and executive branches have defined national objectives for U.S. education that transcend local boundaries. They have stated that the national goal is to be “first in the world in mathematics and science education” by the end of this century. Accomplishing this national goal in the context of locally defined curricula presents a singular challenge. How can we attempt to increase national average achievement in the current chaotic curricular environment? The answer would appear to be that we cannot.

Many in the educational community fear this lesson of TIMSS the most. Some believe that standards that transcend localities will make local innovations difficult or impossible. Others fear that an approach favoring high standards for all will unfairly hold our nation’s underprivileged schoolchildren up to standards that they cannot hope to reach. Still others worry that our brightest children will be held back by such an approach.

However, standards need not preclude innovation. This is demonstrated in a recent study of innovations conducted in Asia, Europe, and America by the Organization for Economic Cooperation and Development (OECD). Noteworthy innovations were found in countries with national standards and other types of overall standards. In addition, when well defined, a “high standards for all students” approach can help guide policymakers in ensuring access to the resources necessary to help underprivileged schoolchildren meet these standards. In fact, this is a common justification for the “high standards for all” approach in many TIMSS countries.

To rise to the challenges that beset our educational systems, we must seek policies that foster innovation (and facilitate diffusion of successful innovations) while ensuring high standards for all. That this is difficult is certain, but refusing to contend with this issue is likely to ensure mediocre average performance into the 21st century, with inferior achievement being retained as the special patrimony of many of our country’s poorest and most disadvantaged students. A national commitment to high achievement is clearly incompatible with restricted standards. Courage in formulating ambitious educational goals should not be coupled with timidity in addressing the question of ensuring access to the high standards that would make accomplishing these goals possible for the majority of our students.

A “high standards for all” curriculum is not only demanding for students; it places great demands on all the resources of the system. If the United States were to take up the challenge of formulating such standards, many elements of the system would require alteration. Textbooks, standardized tests, and other instructional resources, including time for instruction and its preparation, would need to be reexamined to ensure that they support teachers and students in their new roles as implementers of this curriculum. Our existing systems of education are experienced in the type of instruction an episodic curriculum requires. But new tools will be needed if new types of curriculum are devised.

One of the most important resources of our system is teachers. New focused and demanding goals will require new approaches in the preparation of new teachers and in the support of teachers already in service. A focused and demanding curriculum for teachers will also be required.

Splintered versus integrated reform

It is clear that there are no simple fixes to the challenges facing U.S. education. Reforming our policies and practices is a challenge to the very structure of teaching and learning in our country, involving standards, tests, textbooks, teaching methods, teachers, and other factors.

Changing only a few of these factors is unlikely to affect mean achievement in this country. Isolated attempts at reform are also not likely to be effective in changing national patterns. Because educational systems are involved, integrated systemic strategies, instead of widely dispersed foci of reform, are required. Localized reforms have their place-they engage the creativity and knowledge of our teachers, administrators, and communities. The challenge before us as a nation, however, is not merely to permit the random generation of innovations locality by locality like so many fireflies swarming in the night. The challenge is to create sound renovated educational systems that flood the light of reform into every corner of our nation. Translating innovations into institution-building requires the commitment of educational systems. Until this happens, most of our schoolchildren will be unable to benefit from even the most brilliant local reform efforts.

Perhaps the most significant contribution of TIMSS is in understanding systemic and institutional alternatives. Lessons from TIMSS have challenged and no doubt will continue to challenge our most basic assumptions about schooling and how our educational systems provide access to learning. TIMSS allows us to learn from high-achieving countries as well as other countries and to translate these lessons into new approaches to old problems that take into account our own history, culture, and institutions.

TIMSS is very much a work in progress. Its many interrelated sets of information are still being used to answer a number of questions concerning education in mathematics and science. Already, however, it has taught us important lessons with profound implications for the conduct of schooling in our country. At the U.S. national research center for TIMSS, we are continuing the work that we hope will contribute to understanding these lessons better and to learning new ones. However, we have shown that there is much that the United States can learn from schooling in other countries. We have uncovered a number of challenges for education and educational policies that have clear implications for the achievement of our students in mathematics and science as we reach the 21st century.

Stay the Course on Chemical Weapons Ban

Leave it to Washington to toil for more than two decades to create a new arms control regime that abolishes poison gas and then, once it takes off, to begin foolishly undercutting its own achievement by trying to water down the treaty’s verification provisions. But that is exactly what Congress is trying to do, and it must be dissuaded.

On April 29, 1997, a revolution unlike any other in arms control history began. Teams of inspectors began criss-crossing the globe to monitor compliance with the Chemical Weapons Convention (CWC), which bans the development, production, stockpiling, transfer, and use of poison gas. Participating countries are obligated to destroy their chemical arsenals and weapons production facilities under international supervision. In addition, inspectors will routinely check the activities of the chemical industry to ensure that chemicals used in commercial products are not being diverted to produce lethal chemical agents.

Perhaps the most notable achievement of the CWC’s early days is that so many governments embraced a treaty that unambiguously mandates the acceptance of short-notice challenge inspections of any site on their territory suspected of engaging in prohibited activity. To date, more than 100 countries have joined this accord, and more than 60 others have signed but not yet ratified it. The possessors of the world’s two largest chemical weapons stockpiles, Russia and the United States, are CWC members, and the roster of participants includes countries from every corner of the earth-South Africa, Cuba, Brazil, Japan, France, Jordan, and Belarus, to name a few.

Of the roughly two dozen countries considered likely to possess a chemical weapons capability, only North Korea, Syria, Egypt, Iraq, and Libya remain outside the CWC. In May 2000, the CWC’s automatic economic penalties will cut off aspiring proliferators from the marketplace of commercial chemicals that can also have military utility. Whether by making it more difficult for countries to stockpile poison gas or by compelling countries to relinquish their chemical weapons programs, the CWC endeavors to reverse the proliferation trend.

Congress undercuts U.S. interests

The CWC undoubtedly would have been seriously undermined without U.S. participation. At the eleventh hour and after a rancorous debate, the U.S. Senate voted to ratify the CWC on April 24, just five days before it was activated. Even before ratification, the United States had already begun to destroy its stockpile of more than 29,000 metric tons of poison gas. CWC inspectors have initiated continuous monitoring operations at the destruction plants at Johnston Atoll in the Pacific Ocean and at Tooele, Utah. Destruction facilities will be constructed at seven other locations where U.S. chemical weapons are stored. In addition, inspections have been conducted at former U.S. chemical weapons facilities and at the sites involved in the U.S. chemical weapons defense program. The treaty permits research to develop and test protective gear, vaccines, and antidotes, but such defense programs will be closely watched. Thus, CWC inspectors are monitoring all aspects of the United States’ former chemical weapons program.

Nonetheless, the United States is not in full compliance with the CWC because it has not yet approved its implementing legislation. As a result, the U.S. chemical industry, which supported the CWC’s ratification and has accepted the treaty’s data reporting and inspection burdens, does not have the guidelines to fulfill these obligations. The legislation directs the chemical industry to provide data about certain chemicals that the CWC’s inspectors would then check during routine inspections. Both houses of Congress passed the implementing legislation, but the Senate did not vote on a rider that the House attached just before Congress recessed. Thus, the legislation died.

Perhaps equally disturbing, Congress has tried to tinker with the CWC’s verification provisions to give U.S. facilities a break on the treaty’s stringent monitoring provisions. In the implementing legislation, both the House and the Senate passed language that would allow the president to refuse a challenge inspection on the grounds that it could threaten U.S. security. This language directly contradicts the obligation that the United States undertook when it joined the treaty to accept challenge inspections at any time, at any place on U.S. territory. The Senate also stipulated when ratifying the CWC that no samples collected during a routine or challenge inspection may be taken out of the country for additional analysis. Since the inspectors will carry analytical equipment with them, they will rarely invoke the right to conduct off-site analysis. When they do, however, detailed analysis at laboratories certified by the CWC’s inspectorate in the Hague may be crucial to clarifying whether a country has cheated.

The Pentagon, the intelligence community, and the chemical industry have all agreed to the CWC’s verification measures, but some members of Congress continue to object based on false concerns that the very inspection measures needed to verify compliance abroad will compromise national security or confidential business information at home. What these members fail to appreciate is that the CWC contains ample protections to safeguard such information, which is why the chemical industry, the Pentagon, and the intelligence community gave the CWC their seal of approval. When Congress reconvenes, it may continue trying to create exemptions in the CWC’s verification regime. If Congress does so, then other countries will surely exploit these loopholes. In short, U.S.-made exclusions to the CWC’s verification regime will ultimately backfire on U.S. security interests when other countries deny a U.S. challenge inspection request or thwart inspectors’ efforts to have a sample analyzed off-site. Such an outcome would gut the treaty’s verification protocol.

Evidence of the CWC’s clout

Poison gas has long been so universally abhorred that governments have been loath to admit having stockpiled weapons or built facilities to make chemical agents. Before the CWC went into effect, Russia and the United States were the only two countries to admit possessing chemical weapons, even though intelligence agencies had concluded that about two dozen countries had chemical weapons programs.

Russia, which has declared that it possesses some 40,000 metric tons of chemical weapons, ratified the CWC in November 1997. Strapped for funds to destroy its arsenal, Moscow is banking on the willingness of other countries to help pay for its destruction program. Likely donor countries, however, may withhold significant contributions until the CWC’s inspections settle concerns that in the late 1980s and early 1990s the Soviet, now Russian, chemical weapons complex developed, tested, and produced small quantities of an entirely new generation of deadly nerve agents. Moscow has denied that this activity occurred. Further, Russia wants to exempt from inspection former chemical weapons production facilities that have already been converted to peaceful enterprises. Other countries that are in full compliance with the treaty will insist that Russia divulge all required data and allow unimpeded access to treaty-relevant sites. Only full cooperation with the CWC’s inspectorate will garner continued Western aid for Russia’s chemical weapons destruction program.

Now that the CWC has strengthened the behavioral norm against chemical weapons, more countries are terminating their chemical weapons programs. China declared having former chemical weapons production facilities, which CWC inspectors have already visited and mothballed. India said that it possessed production facilities, along with an arsenal of as-yet-unknown size. Pakistan and Iran, both suspected of harboring chemical weapons programs, joined the treaty in November 1997 and are scheduled to declare what they possess early in 1998. France acknowledged that it had production facilities. In addition, one more nation has reported to the CWC inspectorate that it has a chemical weapons stockpile, but that country has not announced this to the public.

By the end of October, the CWC’s inspectors had completed more than 85 inspections in 20 member states. Among the sites inspected were 34 chemical weapons production facilities, 19 chemical weapons storage facilities, and 23 facilities that produce small quantities of highly toxic chemicals for permitted purposes, such as defensive or medical research. Five chemical weapons destruction facilities are being continuously monitored. When the numbers are tallied, the CWC’s potential to reduce the chemical weapons threat becomes apparent: Within six months of the CWC’s activation, more than 80 facilities involved in chemical weapons-related activities had already received the scrutiny of international inspectors. Although there were high hopes for the CWC, few thought so much would be accomplished so quickly.

If Congress creates loopholes for the United States, other countries will surely try to exploit them.

As might be expected with the startup of a system of international legal requirements and a new inspection agency, all of the news is not so encouraging. For example, countries have dallied in providing their assessed contributions to the inspectorate. The funding shortfall was so severe during the summer of 1997, when the United States and Japan were withholding funds, that the inspectorate’s director, Jose Bustani of Brazil, notified participating states that he would soon have to halt inspections. The financial situation has improved somewhat but is still a major concern.

Another problem frustrating the inspectorate has been the failure of participating countries to file declarations. Roughly 30 of the more than 100 member countries have not met the CWC’s initial paperwork requirements, and some of the declarations received were incomplete. To a certain extent, this problem was predictable. Unlike the United States and Russia, the lion’s share of the CWC’s members lack extensive experience in handling declarations or inspections. With time, the responsible authorities in the CWC member states will become more accustomed to the treaty’s requirements, and the track record in this area will improve.

A different type of tug-of-war brewing in the Hague pertains to the CWC’s secrecy rules. Under the treaty, a government can require the inspectorate to protect the confidentiality of all information in its declarations and inspections. Although some details should be held in the tightest secrecy, a certain level of transparency is needed to promote awareness of and confidence in the CWC. Some CWC members are extremely reluctant to release information about treaty-related activities that have reversed long-standing denials about the existence of chemical weapons programs. Bustani has managed to persuade some countries to allow him to divulge broad characterizations about CWC implementation activities. However, more information needs to be publicly presented. Sensitivities about the release of treaty-related data should ease as governments gain confidence that monitoring activities confirm their compliance with the CWC and allow them to be members in good standing of the international community.

If governments can overcome the initial discomfort caused by the managed transparency of the CWC’s intrusive verification provisions, they will grow to appreciate how the treaty can enhance their security. No longer will decisionmakers in one capital question whether a neighboring country is mounting a clandestine chemical weapons program; inspectors will routinely visit high-risk facilities in all participating states, and challenge inspection rights can be exercised to confirm or allay suspicions. This type of cooperative security arrangement is far preferable to the uncertainties that lead to the expense and instability of arms races.

Should the CWC continue on its current successful course and participating states resolve the shortcomings that have hindered the treaty’s implementation thus far, the CWC may well become a strong model for future cooperative security and disarmament arrangements. Diplomats negotiating a verification protocol to strengthen the Biological and Toxin Weapons Convention, which lacks any monitoring provisions, are already considering patterning verification measures for this accord after those contained in the CWC.

Consequently, the United States needs to provide leadership to ensure the full and effective implementation of the CWC, at home as well as abroad. It must not create loopholes in the treaty it labored so long to achieve. The CWC’s model of managed cooperative security is one that clearly serves long-term U.S. security interests and deserves unwavering support from Washington.

Saving Nature’s Legacy Through Better Farming

The obvious solutions to environmental problems are not necessarily obvious at all. Organic farming and the time-proven techniques of traditional agriculture hold great emotional attraction. Pure foods grown without chemical fertilizers and pesticides seem clearly preferable to the methods of large agribusiness. Could they be the cure for the unrelenting destruction of earth’s forests and its diverse flora and fauna?

Ironically, developed-world demands for these “obvious” solutions may push the world into famine and destroy the planet’s biodiversity far faster than chemicals and overpopulation would. Only the judicious application of the “evils” of high-yield farming may give us the time to prevent such calamities. Contrary to common wisdom, saving the environment and reducing population growth are likely to come about only if governments significantly increase their support for high-yielding crops and advanced farming methods, including the use of fertilizers and pesticides.

The biggest danger facing the world’s wildlife is neither pesticides nor population growth but the potential loss of its habitat. Conversion of natural areas into farmland is the major impact of humans on the natural environment and poses a great threat to biodiversity. About 90 percent of the known species extinctions have occurred because of habitat loss.

Whereas many industrialized countries see their farms occupying less and less of their land, worldwide the opposite is true. The World Bank reports that cities take only 1.5 percent of earth’s land, but farms occupy 36 percent. As world population climbs toward 8.5 billion in 2040, it will become even more clear how much food needs govern the world’s land use. Unless we bolster our efforts to produce high-yielding crops, we face a plow-down of much of the world’s remaining forests for low-yield crops and livestock.

Greens versus green revolution

For decades, and certainly since the 1968 publication of Paul Ehrlich’s The Population Bomb, overpopulation has weighed on the world’s conscience. Each regional famine catalyzed by crop failures or weather brings it further to the fore. Yet we seem unaware of how crucial the green revolution has been in forestalling famine and simultaneously saving the environment.

By wringing more food from each acre already farmed, the green revolution’s high-yield crops and farming techniques have been vital in preserving wildlife. By effectively tripling world crop yields since 1960, they have saved an additional 10 to 12 million square miles of wild lands, according to an analysis that I conducted and published in early 1997 in Choices, the magazine of the American Agricultural Economics Association. Without the green revolution, the world would have lost wild land equal to the combined land area of the United States, Europe, and Brazil. Instead, with hybrid seeds and chemical fertilizers and pesticides, today we crop the same 6 million square miles of land that we did in 1960 and feed 80 percent more people a diet that requires more than twice as many grain-equivalent calories.
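
The rough arithmetic behind that estimate follows from the figures just cited (this is a back-of-envelope reading, not a reproduction of the Choices analysis): producing roughly three times the 1960 output at 1960 yields would require roughly three times the land,

\[ 3 \times 6 \text{ million mi}^2 = 18 \text{ million mi}^2, \qquad 18 - 6 = 12 \text{ million mi}^2 \text{ spared}, \]

which is consistent with the 10 to 12 million square miles of wild land cited above.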

The green revolution, however, has had its detractors. Since the publication of Rachel Carson’s Silent Spring in 1962, developed-world residents have been bombarded with claims that modern farming kills wildlife, endangers children’s health, and poisons the topsoil. Understandably, we love the natural ways of life. For many centuries, humans seemed to grow their crops quite well without deadly chemicals that poison soil, plants, insects, and animals. The organic gardening and farming movements look fondly on that ideal. Unfortunately, those techniques are ill suited to the modern world for two strong reasons.

First, they worked in a much less populous world. Such techniques and the plants they favor require large amounts of relatively fertile land supporting small numbers of people. In modern Europe, Asia, and the developing world, such low-yield farming is impractical. Second, many of those techniques are incredibly destructive to soil and forests, degrading biodiversity quickly and irrevocably. Slash-and-burn agriculture, the time-honored primitive farming method, is perhaps the most harmful to the environment.

Ironically, in a world facing the biggest surge in food demand it will ever see, many environmentalists who want to preserve natural areas are recommending organic and traditional farming systems that have sharply lower yields than mainstream farms. A recent organic farming “success” at the Rodale Institute achieved grain-equivalent yields that were 21 percent lower than conventional yields and required 42 percent more labor. Such methods may be theoretically kinder to the environment, but in practice they would lead us to destroy millions of square miles of additional natural areas.

Meanwhile, Greenpeace and the World Wildlife Fund have gathered millions of European signatures on petitions to ban biotechnology in food production. They do not protest the use of biotechnology in human medicine; they object only to its use in agriculture, where it would help preserve nature by increasing farm productivity.

No meat, no thanks

Humans might be able to meet their nutritional needs with less strain on farming resources by eating nuts and tofu instead of meat and milk. So far, however, no society has been willing to do so. For example, a Vegetarian Times poll reported that 7 percent of Americans call themselves vegetarians. Two-thirds of these, however, eat meat regularly; 40 percent eat red meat regularly, and virtually all of them eat dairy products and eggs. Fewer than 500,000 Americans are vegan, foregoing all resource-costly livestock and poultry calories. The vegetarian/vegan percentages are similar in other affluent countries.

The reality is that as the world becomes more affluent, the average person will be eating more meat and consuming more agricultural products. If population growth stopped this hour, we would still have to double the world’s farm output to provide the meat, fruit, and cotton that today’s 5.9 billion people will demand in 2030, when virtually all will be affluent. There are no plans, nor any funding, for a huge global vegan recruiting campaign. Nor does history offer much hope that such a campaign would succeed.

Meanwhile, in what used to be the poor countries, the demand for meat, milk, and eggs is already soaring. Chinese meat consumption has risen 10 percent annually in the past six years. India has doubled its milk consumption since 1980, and two-thirds of its Hindus indicate that they will eat meat (though not beef) when they can afford it.

According to the United Nations Food and Agriculture Organization (FAO), Asian countries provide about 17 grams of animal protein per capita per day for 3.3 billion people. Europeans and North Americans eat 65 to 78 grams. The Japanese not long ago ate less than 28 grams but are now nearing 68 grams. By 2030, the world will need to be able to provide 55 grams of animal protein per person for four billion Asians, or they will destroy their own tropical forests to produce it themselves. It will not be possible to stave off disaster for biologically rich areas unless we continue to raise farm yields.
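
A rough calculation shows the scale of that shift. The sketch below, in Python, converts the per capita figures quoted above into annual tonnage; the population figures are taken from the text, and treating them as exact is an illustrative simplification.

    # Rough scale of Asia's animal-protein demand, now and in 2030.
    # Per capita and population figures come from the text above; the
    # conversion to annual tonnage is an illustrative simplification.
    grams_per_day_now = 17
    grams_per_day_2030 = 55
    population_now = 3.3e9
    population_2030 = 4.0e9

    def annual_metric_tons(grams_per_day, population):
        # grams per person per day * people * 365 days, converted to metric tons
        return grams_per_day * population * 365 / 1e6

    now = annual_metric_tons(grams_per_day_now, population_now)
    future = annual_metric_tons(grams_per_day_2030, population_2030)
    print(f"Animal protein: about {now / 1e6:.0f} million tons per year now, "
          f"about {future / 1e6:.0f} million tons per year in 2030")
    # -> roughly 20 million tons now versus roughly 80 million tons in 2030,
    #    about a fourfold increase in protein alone, before counting the feed
    #    grain needed to produce it.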

To make room for low-yield farming, we burn and plow tropical forests and drive wild species from their ecological niches. Indonesia is clearing millions of acres of tropical forest for low-quality cattle pastures and to grow low-yielding corn and soybeans on highly erodible soils to feed chickens. Similarly, a World Bank study reports that forests throughout the tropics are losing up to one-half of their species because bush-fallow periods (when farm lands are allowed to return to natural states) are shortened to feed higher populations.

Pessimists have said since the late 1960s that we won’t be able to continue increasing yields. However, world grain yields have risen by nearly 50 percent in the meantime. If we’d taken the pessimists’ advice to scrap agricultural research when they first offered it, the world would already have lost millions of square miles of wildlife habitat that we still have.

Nor is there any objective indication that the world is running out of ways to increase crop yields and improve farming techniques. For example, yields of corn, which is rapidly becoming the world’s key crop, are continuing to rise as they have since 1960, at about 2.8 percent annually. The yield trend has become more erratic, mainly because droughts decrease yield more in an eight-ton field than they do in a one-ton field. U.S. corn breeders are now shooting for populations of 50,000 plants per acre, three times the current corn belt planting density, and for 300-bushel yields.
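
As a check on what a 2.8 percent annual trend implies over time, the short sketch below, in Python, compounds that rate from 1960 onward; the starting index of 1.0 and the 1997 end year are illustrative assumptions, not figures from the article.

    # Compound growth implied by a 2.8 percent annual yield trend.
    # The 1.0 starting index and the 1997 end year are illustrative assumptions.
    import math

    growth_rate = 0.028
    years = 1997 - 1960

    yield_index = (1 + growth_rate) ** years
    print(f"Yield index after {years} years: {yield_index:.1f} times the 1960 level")
    # -> about 2.8 times, i.e., a sustained 2.8 percent trend roughly triples
    #    yields in under four decades.

    doubling_time = math.log(2) / math.log(1 + growth_rate)
    print(f"Doubling time at 2.8 percent per year: about {doubling_time:.0f} years")
    # -> about 25 years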

The biggest danger facing the world’s wildlife is neither pesticides nor population growth but the potential loss of habitat.

Also, the International Rice Research Institute in the Philippines is redesigning the rice plant to get 30 percent more yield. Researchers are putting another 10 percent of the plant’s energy into the seed head (supported by fewer but larger stalk shoots). They’re using biotechnology techniques to increase resistance to pests and diseases. The new rice has been genetically engineered to resist the tungro virus-humanity’s first success against a major virus. The U.S. Food and Drug Administration is close to approving pork growth hormone, which will produce hogs with half as much body fat and 28 percent more lean meat, using 25 percent less feed grain per hog. Globally, that would be equal to another 20 to 30 million tons of corn production per year.

The world has achieved strong productivity gains from virtually all of its investments in agricultural research. The problem is mainly that we haven’t been investing much. One reason for underinvesting is pessimism about how much can be gained through research. But if humanity succeeds only in doubling instead of tripling farm output per acre, the effort will still save millions of square miles of land. Besides, the more pessimistic we feel about agricultural research, the more eager we should be to raise research investments, because there is no doubt that we will need more food.

Saving the soil

Throughout history, soil erosion has been by far the biggest problem with farming sustainability. Modern high-yield farming is changing that situation dramatically. Simple arithmetic tells us that tripling the yields on the best cropland automatically cuts soil erosion per ton of food produced by about two-thirds. It also avoids pushing crops onto steep or fragile acres.
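
The arithmetic behind that claim is simple enough to show directly. The sketch below, in Python, divides a fixed erosion rate per acre by a tripled yield; the erosion figure of five tons of soil per acre per year is an arbitrary number used purely for illustration, not one taken from the article.

    # Soil erosion per ton of food when yields triple on the same land.
    # The five-tons-per-acre erosion rate is an arbitrary illustrative figure.
    erosion_tons_per_acre = 5.0
    yield_1960_tons_per_acre = 1.0
    yield_today_tons_per_acre = 3.0 * yield_1960_tons_per_acre

    erosion_per_ton_1960 = erosion_tons_per_acre / yield_1960_tons_per_acre
    erosion_per_ton_today = erosion_tons_per_acre / yield_today_tons_per_acre

    reduction = 1 - erosion_per_ton_today / erosion_per_ton_1960
    print(f"Erosion per ton of food falls by {reduction:.0%}")
    # -> 67 percent, roughly two-thirds, before counting the further savings
    #    from conservation tillage and no-till described below.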

Relatively new methods such as conservation tillage and no-till farming are also making a big difference. Conservation tillage discs crop residues into the top few inches of soil, creating millions of tiny dams against wind and water erosion. In addition to saving topsoil, conservation tillage produces far more earthworms and subsoil bacteria than any plow-based system. No-till farming involves no plowing at all. The soil is never exposed to the elements. The seeds are planted through a cover crop that has been killed by herbicides. The Soil and Water Conservation Society says that use of these systems can cut soil erosion per acre by 65 to 95 percent.

Organic farmers reject both these systems because they depend on chemical weed killers, not plowing and hoeing, to control weeds. However, these powerful conservation farming systems are already being used on hundreds of millions of acres in the United States, Canada, Australia, Brazil, and Argentina. They have been used successfully in Asia and even tested successfully in Africa.

The model farm of the future will use still-more-powerful seeds, conservation tillage, and integrated pest management along with still-better veterinary medications. It will use global positioning satellites, computers, and intensive soil sampling (“precision farming”) to apply exactly the seeds and chemicals needed for optimum yields, with no leaching of chemicals into streams. Even then, high-yield farming will not offer zero risk to either the environment or to humans. But it will offer near-zero and declining risk, which will be more than offset by huge increases in food security and wild lands saved.

Food security and lower birthrates

Food availability and modern medicine have lowered the world’s death rates, producing a one-time surge in population growth. But they are also helping in the longer term to restabilize population by giving parents confidence that their first two or three children will live to adulthood.

Increased food security, for which crop yields are the best proxy, has been a vital element in sharply reducing world fertility rates. Indeed, according to World Bank and FAO statistics, the countries that have raised their crop yields the fastest have generally brought their births per woman down the fastest. For example, Indonesia has increased its rice yields since 1960 by 250 percent and its births per woman have dropped from 5.5 to 2.9. Likewise, Zimbabwe more than doubled its corn yields with Africa’s best plant-breeding program, while births per woman have dropped from 8 in 1965 to 3.5 today. In contrast, countries without high-yield trends have kept higher fertility rates. In Ethiopia, which has suffered famine instead of rising yields, births per woman have risen from 5.8 in 1965 to more than 7 today.

Environmentalists seem unaware of how crucial the green revolution has been in preventing famine and preserving biodiversity.

Unfortunately, the world is not gearing up its science and technology resources to meet the agricultural and conservation challenge. U.S. funding for agricultural research has declined for decades in real terms, even as the cost and complexity of research projects continue to rise with the size of the challenge. Federal and state governments increased their nominal spending on agricultural research from $1.02 billion in 1978 to $1.65 billion in 1990, which amounts to a one-third decline in constant dollars. Public funding rose to $1.8 billion in 1996. Likewise, private-sector agricultural research spending rose from $1.5 billion in 1978 to $3.15 billion in 1990, a 15 percent decline in real terms.
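
The apparent paradox of rising budgets and shrinking research effort is simply a constant-dollar conversion. The sketch below, in Python, illustrates the calculation; the price deflator of 2.4 is a hypothetical value chosen to reproduce the one-third figure, not an official index.

    # Converting nominal research spending to constant dollars.
    # The deflator of 2.4 (1990 dollars per 1978 research dollar) is a
    # hypothetical assumption chosen to illustrate the one-third claim,
    # not an official statistic.
    nominal_1978 = 1.02   # billions of current dollars
    nominal_1990 = 1.65   # billions of current dollars
    deflator_1978_to_1990 = 2.4

    real_1990_in_1978_dollars = nominal_1990 / deflator_1978_to_1990
    change = real_1990_in_1978_dollars / nominal_1978 - 1
    print(f"Real change in public agricultural research spending: {change:.0%}")
    # -> roughly a one-third decline, despite the nominal increase.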

Overseas, the research funding picture is worse. Europe has never spent heavily on agricultural research. Only a few of the developing world countries, including Brazil, China, and Zimbabwe, have even sporadically spent the few millions of dollars needed to adapt research to their own situations. All told, the entire world’s agricultural research investment is probably less than $15 billion a year.

A telling example of the world’s cavalier attitude toward agricultural research occurred in 1994, when the United States and other donor nations failed to come up with a large part of the budget for the Consultative Group on International Agricultural Research (CGIAR). CGIAR is the key international vehicle for creating high-yielding crops, supporting a network of 16 agricultural research centers in developing countries. Thus, global agricultural research almost literally went bankrupt at the very moment the world was pledging another $17 billion for condoms and contraceptive pills at the UN meeting on population in Cairo. The World Bank subsequently stepped in on a conditional basis to keep the CGIAR research network running.

Historically, the U.S. Agency for International Development (AID) provided about 25 percent of CGIAR research funding, or about $60 million a year. Currently, this has fallen to about $30 million per year in much cheaper dollars, or about 10 percent of AID’s budget. Indeed, despite the centers’ success in raising world crop yields, AID has since shifted its priorities sharply from agricultural research to family planning. Given the sharp downward trends in birthrates in developing countries, additional family planning funds are likely to make only a modest difference in the world’s population. However, Western intellectuals and journalists highly approve of population management.

In sum, world spending on agricultural research is tiny, especially if you consider that in 1996, the U.S. food industry alone produced $782 billion in goods and services and that the federal government subsidizes farmers to the tune of nearly $100 billion a year. (The European Union spends another $150 billion a year on farm subsidies.) Meanwhile, agricultural research has saved perhaps one billion lives from famine, increased food calories by one-third for four billion people in the developing world, and prevented millions of square miles of often biologically rich land from being plowed down.

We shouldn’t be too surprised at the lack of approval and funding for high-yield agricultural research. Industrialized countries, which have funded most modern farming research, have been surrounded for the past 40 years with highly visible surpluses of grain, meat, and milk. Too many citizens associate the surpluses with science, not with ill-conceived farm price supports and trade barriers.

Western Europe watched its farm population decline from about 28 percent in 1960 to about 5 percent today. This followed an earlier but similar decline in the number of U.S. farmers. Both Europe and the United States associate the decline of the small family farm with the rise in crop yields, not with the rising value of off-farm jobs.

Securing the future

Feeding the world’s people while preserving biologically rich land will require two key things: more agricultural research and freer world trade in farm products. Expanded agricultural research should be the top priority.

Congress should double the federal government’s $1.4 billion annual investment in agricultural research and adopt substantially higher farm yields as one of the nation’s top research priorities. No other nation has the capacity to step into the U.S. research role in time to save the wild lands. Congress should also release much of the cropland still in the U.S. Department of Agriculture’s (USDA) Conservation Reserve Program for farming with conservation tillage, and it should direct AID to make the support of high-yield agriculture at least as important as population management.

In addition, in order to use the world’s best farmland for maximum output, farm trade must be liberalized. Farm subsidies and farm trade barriers, although they are beginning to be reduced, have not only drained hundreds of billions of dollars in scarce capital away from economic growth and job creation; they now represent one of the biggest dangers to the preservation of biologically diverse lands. The key dynamic in the farm-trade arena is Asia’s present and growing population density. Without an easy flow of farm products and services, densely populated Asian countries will be tempted to rely too heavily on domestic food production. But that will be extremely difficult to do. By 2030, Asia will have about eight times as many people per acre of cropland as will the Western Hemisphere. It already has the world’s most intensive land use. In reality, countries reduce their food security by pursuing self-sufficiency. Droughts and plagues that cut crop yields are regional, not global.

The United States must convince the world that free trade in farm products would benefit all, particularly those in developing countries. President Clinton should make free farm trade a top international priority, which could give momentum to the World Trade Organization’s scheduled 1999 talks on liberalizing trade in agricultural products.

Changes in attitude

Finally, a renewed emphasis on high-yield farming aimed at preserving biodiversity will require a change in mind-set on the part of key actors: environmentalists, farmers, and government regulators in particular. The environmental movement must postpone its long-cherished goal of an agriculture free from man-made chemicals and give up its lingering hope that constraining food production can somehow limit population growth. Until we understand biological processes well enough to get ultrahigh yields from organic farming, environmentalists must join with farmers in seeking a research agenda keyed primarily to rapid gains in farm yields whether they are organic or not.

Farmers must accept that environmental goals are valid and urgent in a world that produces enough food to prevent famine. They must collaborate constructively in efforts such as protecting endangered species and improving water quality. Without such reasonable efforts, farmers will not win the public support needed for high-yield farming systems and liberalized farm trade.

Government regulators at all levels must realize that chemical fertilizers, pesticides, and biotechnology techniques are powerful conservation tools. For example, the Environmental Protection Agency (EPA) must stop regarding the banning of a pesticide as a victory for the environment. Having dropped the economic rationale that protected some high-yield pesticide uses, EPA should now take into consideration the potential for new pest-control technologies to save wild lands and wild species through higher yields, both nationally and globally.

Education can play a big role in changing the mind-sets of the various actors. For example, the U.S. Department of State, which has already announced an environmental focus for U.S. foreign policy, could work to ensure that the concept of high-yield conservation is appropriately encouraged in international forums. The U.S. Department of Education could collaborate with USDA to help the nation’s students understand the environmental benefits of high farm yields.

On all fronts, this is a time for pragmatism. We know that high-yield farming feeds people, saves land, and fosters biodiversity. We know that agricultural research is the surest path to those same goals. The narrower goals should be subsumed into the larger ones for the short- to mid-term future. Agricultural science and policy can combine to serve the welfare of the planet, its people, its animals, and its plants. Achieving those crucial aims will mean rethinking population, farming methods, fertilizers, and many related controversial aspects of agriculture.

The Unfinished Work of Arms Control

The world got through the half century since Hiroshima and Nagasaki with no further use of nuclear weapons in conflict and with a degree of restraint in avoiding major war among the great powers that could very well have been due to the cautionary influence exerted by the existence of nuclear weapons. But the nuclear weapons era has entailed considerable costs and dangers-above all the risk that the unimaginable destruction of nuclear war would be unleashed by accident or error or by escalation from a conventional conflict or a crisis. Also, the risk has always been present that the major powers’ prominent reliance on nuclear deterrence and the possible use of nuclear weapons in war fighting would promote nuclear proliferation among more and more countries.

With the Cold War over, the danger of premeditated nuclear war with Russia has practically disappeared, and the conventional military threats once thought to require deterrence with nuclear weapons are likewise much diminished. The United States and Russia have taken advantage of these fundamental changes with a series of major agreements and unilateral initiatives. Under the terms of the first Strategic Arms Reduction Treaty (START I), signed in 1991 and currently being implemented by both countries, the number of strategic nuclear warheads deployed by the two sides will be cut from 13,000 and 11,000, respectively, to about 8,000 each. START II, signed in 1993, would further limit the number of deployed strategic warheads to 3,000 to 3,500 on each side; the United States ratified the treaty in early 1996, but Russia has not yet done so. At the Helsinki summit in March 1997, Presidents Clinton and Yeltsin agreed to seek a START III treaty with a level of 2,000 to 2,500 deployed strategic nuclear warheads. Unilateral initiatives since the early 1990s have also significantly reduced the numbers of deployed nonstrategic warheads, especially on the U.S. side. Nuclear testing has ended, and the United States and Russia have agreed not to target their missiles against each other on a day-to-day basis. Perhaps most important, a debate has begun on the proper role and function of nuclear weapons in the long run.

Despite this remarkable progress in reducing the number of nuclear weapons, neither the basic character of U.S. and Russian nuclear forces nor the plans and policies for their use have fundamentally changed from what they were during the Cold War. This leaves us with nuclear postures, and associated costs and risks, out of proportion to the diminished demands on these forces in the post-Cold War world. For example, both the United States and Russia continue to maintain a significant portion of their nuclear forces in a state of alert that would permit them to launch thousands of nuclear warheads in a matter of minutes. These continuous-alert practices exacerbate the risk of erroneous or unauthorized use. As long as one side maintains its forces in a state of high alert, it is politically unrealistic to expect the other side to lower its guard. And Russia recently announced that to offset the weakness of its conventional forces, it is adopting for its nuclear weapons a “first-use-if-necessary” doctrine similar to that of the United States and NATO, thus apparently giving nuclear weapons a more central role in its national security.

Moreover, the size of these arsenals, even after START I and (we hope) START II are implemented, will remain larger than necessary for deterrence. Also, the risk that other countries might obtain nuclear weapons remains serious and requires continuing high-priority attention.

Fundamental change needed

To respond fully to the opportunities to reduce nuclear dangers opened by the end of the Cold War, the United States should adopt a fundamental principle: The role of nuclear weapons should be restricted to deterring or responding to a nuclear attack against the United States and its allies-that is, the United States would not threaten to respond with nuclear weapons to attacks by conventional, chemical, or biological weapons. Limiting nuclear deterrence to its “core function” would permit significant measures to further reduce the risks posed by nuclear weapons, including changes in nuclear operations and improvements in the safety and survivability of nuclear weapons. Adequately sized and properly equipped conventional forces would be essential in providing an effective response to nonnuclear threats. Consonant with this approach, of course, the United States must meet its own security requirements and its commitments to friends and allies. And it must take great care to reassure its allies that those commitments will be kept.

Since the Persian Gulf War, there has been considerable discussion about whether nuclear weapons should be used to deter chemical and biological weapons. It is a serious misnomer to lump the three types of weapons together under the label “weapons of mass destruction.” In reality, these are very different types of weapons in terms of lethality, of certainty of destruction, and of their relative effectiveness against military targets. Chemical and especially biological weapons are serious and growing problems for international security. But nuclear weapons are not the answer to the most likely uses of chemical and biological weapons against the United States or its allies.

Restricting nuclear weapons to the core deterrence function would permit a number of significant changes. First, the United States should make no first use of nuclear weapons its explicit doctrine-and encourage Russia to do the same-rather than continuing to adhere to “first-use-if-necessary” for nuclear weapons. This would allow for much deeper reductions in the U.S. nuclear arsenal. Provided that the remaining nuclear forces are survivable and their command-and-control systems are robust, just a few hundred warheads might satisfactorily fulfill this core deterrent function. Reaching such low levels will obviously have to be accomplished in stages, and very significant improvements in our verification capabilities will be required to ensure that small numbers of nuclear weapons are not hidden away for deleterious purposes. Also, other countries, both declared and undeclared nuclear powers, must be included in a regime of nuclear arms reductions before the United States and Russia could prudently reduce the number of their warheads below 1,000.

Short-term steps

Among the short-term measures to be taken, two seem particularly important to restore momentum toward fulfilling the unfinished agenda of reducing the nuclear danger.

Jump-start START. Serious discussions should begin immediately to outline the details of the proposed START III agreement, rather than waiting for START II to take effect. The current policy of demanding Russian ratification of START II before discussions begin gives the Russian Duma too much leverage over the arms control process and could cause unnecessary delay when (and if) ratification is achieved.

In addition, to enable early agreement, START III should be negotiated under the counting rules created in START I and II, which count deployed delivery systems and then assess the number of deployed strategic warheads indirectly. The difficulty of agreeing on the details of a change to counting total warheads-and actually doing the counting-is more of a burden than the next round of reductions should have to bear. Future agreements beyond START III, however, should encompass all nuclear warheads: strategic and nonstrategic, active and reserve.

Prune the nuclear hedge. The 1994 Nuclear Posture Review, carried out by the Department of Defense, is the basis of current U.S. policy. A key factor in the review’s conclusions was the perceived need to retain U.S. flexibility in case reform in Russia failed. As a result, the United States opted to maintain a “hedge” of additional reserve warheads to provide the ability to reconstitute its nuclear forces if it became necessary. But additional firepower would not improve the practical deterrent effect of U.S. nuclear forces in the event of renewed antagonism with Russia. Moreover, should the need arise, the United States could increase its strategic readiness in ways open to intelligence-gathering systems-for example, by dispersing bombers or by moving a larger fraction of its ballistic missile submarine force to patrol areas-and that capability would provide a genuine hedge against surprise. The United States would only need to increase its nuclear force levels if massive growth in the Russian force imperiled the survivability of the U.S. arsenal; for the foreseeable future Russia has no realistic capability for such reconstitution.

The primary risk posed by the hedge strategy is that it could become a self-fulfilling prophecy: The United States may consider keeping a substantial stock of reserve warheads a matter of prudence, but to Russia it could look very much like an institutionalized capability to break out of the START agreements. To the extent that the United States is concerned about a return to hostile relations with Russia, it should focus on decreasing the probability of such perceptions.

Adopting a no-first-use doctrine might allow the United States to reduce its nuclear stockpile to a few hundred weapons.

Abandoning the hedge would also save several billion dollars a year and ease the burden on the Department of Energy in maintaining the reliability and safety of an oversized nuclear stockpile. In the absence of a compelling security requirement, it makes good budgetary and military sense to reduce the number of warheads.

Two other important short-term measures require serious technical study and analysis to make their implementation possible:

Providing greater operational safety. In parallel with but not directly tied to the START III discussions, the United States should begin seeking measures to provide higher levels of operational safety for nuclear weapons. Technical discussions with the Russians should begin as soon as possible so that any unilateral moves might be readily reciprocated. At present, “dealerting”-measures to extend the time it would take to prepare nuclear weapons for launching-is receiving considerable attention. Serious detailed studies are needed from the military-technical community to provide the basis for implementing this idea. And any agreed reduction in alert status would have to be accompanied by reliable means of assuring compliance, an essential element of which would be a warhead accountability system.

Although it is relatively easy to describe the idea of “dealerting,” achieving it without destabilizing consequences will not be trivial. To the extent that we are concerned about the safety and security of Russian strategic nuclear forces, however, such measures are the most direct remedy. More broadly, ending continuous-alert practices would be a significant step toward reducing the dangers of a hair-trigger posture.

Counting all warheads. At the Helsinki summit in March 1997, Presidents Clinton and Yeltsin agreed to begin exploring how to move toward a regime that uses warheads-all warheads, not just those deployed-rather than delivery vehicles as the unit of account. This is an essential step for deep reductions in nuclear weapons; countries will not agree to cut their arsenals to minimum levels if they cannot be assured that significant stocks of nuclear warheads are not hidden away. It is also a formidable verification challenge, requiring advances in technology considerably beyond what is available today.

But no verification system could provide complete assurance that no clandestine stocks remained. Therefore, as nuclear reductions proceed to lower levels, the issue of how much uncertainty is acceptable becomes increasingly important. This, in turn, places a greater burden on the international security system to provide confidence that there will be few incentives to cheat or that violations, when detected, will be dealt with swiftly. It emphasizes the necessity for our own security to maintain conventional forces capable of executing whatever tasks they are called upon to perform. It also highlights the importance for the United States of maintaining stability through equality with Russia during any prolonged period of reductions.

The unfinished agenda for arms reductions thus includes significant political and technical challenges. But we have found remarkable agreement, both within and outside the government and in the international community as a whole, that this is the agenda that must be pursued. The consensus about the role and future of nuclear weapons has changed dramatically since the end of the Cold War. Many once almost unthinkable policy options have now become issues of “when” and “how,” not “whether.” Some of the agenda items are controversial and may not be implemented soon. But we are beyond the stage of philosophical debate and into the realm of wrestling to form workable policy choices and strategies to carry them out.

The Global University

Let’s establish some basic principles. First, business is going global. Information, people, and capital flow quickly and copiously without respect to borders. Skilled workers and industrial infrastructure can be found in a growing number of countries. Corporate nationality is becoming less relevant as all the components of a business become portable.

Second, global engineering work can be carried out anytime, anywhere. Centralized, monolithic engineering operations will give way to integrated project teams (IPTs) that will incorporate workers from across the globe. Work will be handed off “down-sun” in sequence to team members around the world, so that work on individual tasks progresses continuously around the clock.
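
As a concrete illustration of the down-sun pattern, the sketch below, in Python, shows how a single task could stay in nearly continuous work as it is handed from one IPT member to the next. The three sites, their time-zone offsets, and the eight-hour local working day are hypothetical; they are not drawn from any particular company’s practice.

    # Hypothetical illustration of "down-sun" handoffs among three IPT sites.
    # Site names, UTC offsets, and the eight-hour local working day are assumed
    # for illustration only.
    sites = [
        ("California", -8),   # UTC-8
        ("India", 5),         # UTC+5 (rounded for simplicity)
        ("Germany", 1),       # UTC+1
    ]

    shift_hours = 8
    for i, (name, utc_offset) in enumerate(sites):
        # Each site works 09:00 to 17:00 local time; convert to UTC to show coverage.
        start_utc = (9 - utc_offset) % 24
        end_utc = (start_utc + shift_hours) % 24
        next_site = sites[(i + 1) % len(sites)][0]
        print(f"{name}: {start_utc:02d}:00-{end_utc:02d}:00 UTC, then hands off to {next_site}")
    # Together the three shifts cover most of the 24-hour day, so work on a
    # single task progresses nearly around the clock.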

Third, the profile of the global engineering workforce will be driven by the changes in engineering practice just described. As global skill levels rise, Americans will comprise a smaller percentage of the global engineering work force. Employers increasingly will hire not degrees per se but knowledge, capabilities, and skills; and they will have reliable ways to test for standards of knowledge and skill.

As leaders in industry and academe, we have seen numerous reports about improving engineering education. By and large, they call for little more than minor adjustments or additions to current programs. Recognizing that tinkering at the margins would not be enough to meet the challenge of the changing industrial structure, we jointly convened a “summit” of leading industrialists and educators, who spent two full days in intensive exploration of the forces affecting the engineering profession at the beginning of the 21st century and what this means for the profession. The aim was to formulate a new model of engineering education that will better meet the current and future needs of multinational companies and the global engineer.

The result of this brainstorming session was a new “Model of the Global University.” Here again, some basic principles must be set forth. First, in the global environment we just described, academe and industry will converge. Just as industry follows the market, universities must follow industry, locating campuses close to the customer, around the world. Just as industry molds its organization and its product offerings to the needs of the customer, so the academic organization will reconfigure itself to conform to the educational needs of students, with a particular focus on practicing engineers and scientists.

To say that industry and academe will converge is not to imply that they will merge. The mission of the university will continue to revolve around basic research; broad education; the maintenance and dissemination of knowledge in an organized fashion; and a focus on educational processes and technologies.

Second, education will become continuous. For the global engineer, education is a continuum, not just a period of formal learning. As engineers mature personally and professionally, most find that they first require broader knowledge of other scientific and technical disciplines, then management skills, and ultimately the kind of wide-ranging humanistic knowledge that leads to greater personal development.

Third, educational standards will become more important. With engineers working on decentralized teams, with hiring decisions being made remotely, with education being delivered at remote campuses, the ability to reliably convey and recognize specific capabilities will become crucial. Recognized standards of educational delivery and achievement will be the academic equivalent of product quality assurance, going far beyond today’s broad accreditation criteria.

The model of the global university is a logical response to these changes. In this vision, the university reshapes itself structurally to resemble its primary client, industry. The central core campus is still responsible for basic education of entry-level students, for fundamental research, for educational-process innovation, and for management of the system-wide research and education enterprise. But much of the actual delivery of the educational product occurs at branch campuses and remote sites around the world that are located in close proximity to large industrial sites and areas of major industrial activity.

Each branch campus is a regional institution serving either a single large corporate-customer installation or a cluster of companies. It provides educational programming to nontechnical personnel such as managers as well as to scientists and engineers. Classroom formats can include interactive faculty-led instruction, faculty-facilitated multimedia, and distance learning with or without an on-site instructor. Classes are open to local undergraduates and “transfer” students from the central campus as well as to company personnel. In addition to training students, the branch campus provides “technology park” facilities and services tailored to the needs of local industry customers.

This working industrial interface also allows the branch campus to provide educational “raw material” that is generalized and codified into educational programming at the central campus, industrial experience and project teaming opportunities for students and faculty from the central campus, and a conduit for industrial practitioners to participate in education as instructors, curriculum developers, and mentors.

Each remote site is a small-scale learning center focused on the educational needs of a single corporate customer. It provides multimedia access to educational programming as well as some advanced instruction by faculty as appropriate. It may be collocated with the customer.

Branch campuses and remote sites alike can be located anywhere in the world. Both are equipped for distance learning and can be networked into central campus multimedia educational programming. Faculty and students as well as educational material such as courseware flow into and out of the central campus, and to a lesser extent between branch campuses and remote sites.

The university will serve as a clearinghouse for knowledge and will certify educators who can organize and impart that knowledge in the most effective way possible. Thus, as the university makes the transition into the global model, the effect will be to provide a new dimension of educational support for the global corporation. That new resource will strengthen the global corporation and actually help to accelerate the changes occurring in industry and in engineering practice.

The model of the global university described here will bring about a number of changes in the overall configuration of the university system. For example, a tendency toward specialization may occur as institutions focus, for marketing and economic reasons, on their core competencies. “Franchising” of educational programs by an institution, either to commercial service providers or to other universities, is one possible response to this specialization.

Collaboration on the granting of academic degrees by universities will increase at the same time that emphasis on degrees by industrial employers will diminish. With greater standardization of the educational product, educational content will be more uniform and grading more objective. Education can be tailored to the individual, and the details of an individual’s educational itinerary, combined with project experience, will present an accurate professional profile of the person.

The involvement of more industrial practitioners directly in the delivery of engineering education, although highly beneficial for education, will also alter the employment patterns and profiles of faculty. Tenure policies will be affected, and alternative academic employment patterns will emerge.

As the demand by industry for this new dimension of educational support grows, those universities that adapt to meet the demand will thrive; those that do not will become less and less relevant. Over time, then, it is likely that the number of academic engineering programs in the United States will decline.

Other potential implications may be envisioned, and undoubtedly many surprises await. But we believe that the model will work–and work well. More than that, we believe it must be pursued. Global engineering is already a reality. Engineering education and the education system must adapt to that reality.

The Power of the Individual

The life of Leo Szilard has important lessons for scientists eager to influence public policy.

William Lanouette’s fascinating biography of Leo Szilard, Genius in the Shadows, does more than reveal the life of a brilliant physicist and maverick social activist; it sheds a perceptive light on the role of scientists in public policy. World War II is usually recognized as the coming of age of science in U.S. politics. Albert Einstein had become the world’s first science celebrity and a person to whom presidents felt obliged to listen. The Manhattan Project to develop the atomic bomb was an unprecedented federal investment in research, and questions about how to use the insights of nuclear physics for military and civilian purposes brought scientists into direct conversation with the nation’s leaders. And it was at this time that Vannevar Bush laid the foundation for a postwar science policy that would put government in the dominant role in funding basic research.

Some scientists see the period after the war as a golden age when scientists, or at least physicists, were treated with deference in the corridors of power. They wonder why the influence of scientists has not grown with the expanding importance of science in all aspects of modern life. In fact, scientists have become more influential in policy debates concerning health, energy, the environment, transportation, and other areas. There may not be the same sized headlines as when Robert Oppenheimer testified to Congress about nuclear weapons, but there are far more scientists actively influencing public policy. In addition, policymakers and the public have become much better informed about science. Scientific literacy is not what it should be, but we have to remember that nuclear physics was a complete mystery to virtually all Americans in the 1940s. Besides, the science was developing so fast that even the scientists at the forefront were often taken by surprise. As late as 1939, even Enrico Fermi, who directed the team that created the first nuclear chain reaction, did not believe that such a reaction was possible.

What is instructive about Szilard’s life, however, is not the political influence of scientists as a group. Szilard’s efforts to convince the government to develop nuclear weapons and his subsequent campaigns to establish civilian and international control of the power of the atom are an inspiring example of how a determined individual can play a major role in public policy. He believed that scientists should have more influence in policymaking in general-not because of their knowledge but because of their ability to think rationally. This faith in reason was a weakness in Szilard’s political thinking, however, because it prevented him from understanding the emotional forces that must also be taken into account. Indeed, it was the scientific hyperrationality of someone like Szilard that Roald Hoffmann had in mind when he wrote “Why Scientists Shouldn’t Run the World” (Issues, Winter 1990-91).

But Szilard was not expecting to be influential in policy debates just because he was a scientist. An avid newspaper reader, he was extremely well informed about public affairs. And although he often used the reputation of his friend Einstein to gain access to decisionmakers, he believed firmly that it was the power of his ideas that deserved attention. He felt the same way about science. Even as an unemployed and relatively unknown physicist, he expected the giants in the field to respect his ideas if they made sense. In fact, he approached biologists in the same way in spite of his total lack of training in the discipline.

The key to Szilard’s effectiveness and influence was that his sense of responsibility for making the world a better place compelled him to work so hard to advance his ideas. Once he decided that something should be done, he devoted enormous energy, resourcefulness, and chutzpah to advancing his proposal. He didn’t assume that he should be listened to just because he was a brilliant physicist, and he accepted that even the most enlightened thinking had to be promoted vigorously to be influential. Of course, it didn’t hurt that he was way ahead of his time in recognizing the threat posed by Hitler, the importance of nuclear weapons, and the problems with nuclear weapons that would arise after the war.

Not everything that Szilard advocated was wise; reason sometimes overwhelmed common sense. And although we can admire his intelligence and enthusiasm, Szilard’s compulsive travel, social idiosyncrasies, and driven personality are not a model one would want to see widely imitated. Still, his life illustrates important lessons for scientists who want to influence public policy. First, the most important policies are those that address issues bigger than science itself. Szilard studied and cared deeply about the larger issues of governance, not just the role of science. Second, he understood that his scientific training did not entitle him to influence and that the quality of his thinking did not mean that the world’s leaders would come knocking at his door. He knew that to make a difference in the world it is necessary to think broadly; to win support through compelling analysis, not reputation; and to work tirelessly to promote one’s ideas.

What Szilard did was to approach public policy with the same rigor, determination, and persistence with which good scientists approach science. What works in advancing science can also work in improving policy.

The Politics of Education Reform

The recently released Third International Mathematics and Science Study (TIMSS), which made international comparisons of math and science performance among fourth- and eighth-grade students, strengthened the case of those who are calling for ambitious reform of U.S. education. U.S. fourth graders did relatively well in science and about average in math; eighth graders did slightly better than average in science and slightly below average in math. These findings are consistent with other assessments of U.S. student performance.

The TIMSS study also provided new and valuable information about the relationship between instructional practice and student performance. The message to U.S. educators was clear: science and math education needs to be better focused and more rigorous. Although one can still hear arguments that international comparisons are not fair, that the diversity of the U.S. population or the pluralistic nature of its political culture makes it impossible to replicate the coherence found in other countries’ schools, or that U.S. schools are already improving at an acceptable pace, the reality is that the majority of the public, of elected officials, and of educators believe that change is needed. The task is to determine what changes are necessary to make a real difference to students and how reform can be achieved in the U.S. political culture.

The evolution of U.S. education reform

U.S. elementary and secondary education is a vast and extraordinarily complex enterprise that seems to defy simple generalizations. However, the two central imperatives of U.S. educational governance are dispersed control and political pluralism. I have chosen my words carefully here. I use “dispersed” control rather than the more conventional “decentralized” control because I do not think that control of education is actually decentralized in the United States. The notion of local control of schools is, I think, largely inaccurate and outmoded, especially in light of the direction education reform has taken in the past decade. The idea of political pluralism is more straightforward. It captures a fundamental principle of U.S. politics-that political decisions and actions are the result of competing groups with different resources and capacities vying for influence in a constitutional system that encourages conflict as an antidote to the concentration of power.

The story of U.S. education reform since the early 1980s is worthy of either a Gilbert and Sullivan operetta or theater of the absurd, depending on your tastes. In 1983, the National Commission on Excellence in Education releases A Nation at Risk, focusing public attention on a “crisis” of low expectations, mediocre instructional practice, and menacing foreign competition, thereby legitimating a nascent education reform movement that has already begun in a handful of states. From the beginning, it is fairly clear that there is little the federal government can actually do to fix this crisis, because the ideological climate is running against a strong federal role. By the mid-1980s, with many states gearing up to take on the issue, the National Governors Association, under the leadership of a politically ambitious Governor Clinton of Arkansas, promotes the idea of a “horse trade”-greater flexibility and less regulation for schools and school systems in return for more tangible evidence of results, reckoned mostly in terms of student achievement. This is followed by another spate of state and local reforms aimed at deregulation, government restructuring, and tighter state monitoring of student achievement.

In 1989, an extraordinary event occurs: President Bush and 50 governors meet in Charlottesville, Virginia, to draft national goals for education. This Education Summit inaugurates an all-too-brief period in which there appears to be broad bipartisan support for some sort of national movement to support explicit state and local goals and standards. This consensus results in Goals 2000, a Clinton administration initiative with striking similarities to a prior Bush administration proposal. There then ensues a complicated and largely unsuccessful attempt to translate the apparent national consensus on goal-setting into an institutional apparatus that puts the federal government in the role of enabling state and local action. Beginning in the Bush administration, the federal government also gets into the business of lending financial and political support to professional associations to draft national content standards in subject matter areas.

Policy talk hardly ever influences the deep-seated and enduring structures and practices of schooling.

Then “whammo,” with the congressional election of 1994, a seeming ideological reversal occurs on anything vaguely resembling federal action on goals and standards, followed by an unraveling of the earlier bipartisan consensus, some ungraceful wrangling over the funding and implementation of Goals 2000, and some stunningly adept pirouettes by right-leaning previous advocates of standards who overnight become critics of standards and advocates of local control. During this period, the education profession gets an introduction to bloody-nose politics. The carefully crafted history standards are shot down in debate in the Senate and their drafters are sent back to try again. The drafters of the English/language arts standards are held up to public ridicule for their inclusion of deconstruction theory. As if to show the final absurdity of the standards debate, Governor Pete Wilson of California vetoes funding for the state’s ambitious new student assessment system after a blistering debate about its content (too multicultural) and its feasibility (too little data on how individual students are doing). From 1992 onward, the standards movement has been declared officially dead at least once a week.

The national debate on educational standards has not been pretty to watch, but it has embodied a faithful enactment of the principles of dispersed control and political pluralism. The temporary bipartisan consensus on goals and standards after the Charlottesville summit concealed deep suspicion of anything national or federal in matters of curriculum and student learning. It did, however, demonstrate that a coalition of national decisionmakers could, however temporarily, presume to make authoritative judgments about the purposes of schooling. Control of education, it turns out, is only local when schools and school systems appear to be doing the right thing; when they’re not, they are fair game for elected officials, at whatever level of government, with a political interest in their performance. Likewise, the unseemly tussles over the California assessment system and the history and English/language arts standards demonstrate that issues of professional practice are vulnerable to the most basic form of pluralist politics-groups mobilize against proposals they regard as invidious to their interests, without regard for the professional or political authority those proposals carry.

Moving toward standards

What’s most interesting is not that standards inevitably provoked partisan and pluralist debate but that despite this debate, professional organizations, states, and localities continue to plod ahead with the development of standards in a tremendously varied way that fits remarkably well with the principle of dispersed control. Many states and localities are developing and implementing content and student performance standards despite, and often in response to, partisan criticisms. In addition, as standards have become a more prominent part of political discourse in states and localities, the range of actors involved in their development and revision has expanded to include many groups that were not involved in their early formation. States have also been engaged in a broad effort to develop and implement statewide testing programs (many of which antedate the current standards debate) that deliver, with increasing precision, data on student performance in individual schools. Meanwhile, the National Assessment of Educational Progress (NAEP) has become more and more visible in its periodic statements of what U.S. students know about core academic subjects. NAEP now provides state-to-state comparisons, which would have been unimaginable 20 years ago. Finally, despite the highly visible partisan debate over the history and English/language arts standards, other efforts to develop content and performance standards have been much more successful. The math standards developed by the National Council of Teachers of Mathematics and the science standards developed by the National Academy of Sciences are viewed with increasing respect by educators and politicians. The New Standards project, a private nonprofit organization financed by private philanthropies, has recently released a comprehensive set of content and performance standards in mathematics, English/language arts, and applied learning that are increasingly seen as benchmarks for state and local standards-development activities.

Some fundamental changes have occurred in education policymaking at state and local levels over the past decade or so. A decade ago, only a few states and a relatively small proportion of localities collected and reported data on student test performance at the school level. Now, virtually all states and localities have the capacity to collect and report school-level student performance data, and in most states these data are now reported publicly once a year. Thus state and local policymakers and the public at large now have routine access to some sort of data on how individual schools are performing.

A decade ago, most states did not have formal policies that set expectations for measured student performance nor did they have policies that dealt directly with the content of academic instruction. Now, more than half the states are in the process of developing explicit policies about acceptable student performance levels on statewide tests as well as curriculum guidance about what should be taught.

A decade ago, most states viewed their role as setting broad, minimum, largely procedural requirements for local districts to follow in delivering education and disbursing state revenue to local school districts. No state, as far as I can tell, intervened directly in the affairs of individual schools, except in extraordinary cases of incompetence, or challenged the authority of local school districts to serve the students in their communities. Now, many states have adopted a much different posture. Some states have instituted inter-district choice programs that allow students to move across district boundaries, often taking state money with them. Many states have the authority to declare schools or entire districts deficient and assume temporary control over them. And many states directly authorize the creation of publicly supported “charter schools” operating outside the ambit of many state and local regulations.

A decade ago, it was virtually impossible to compare states in terms of useful measures of student performance. Now, with the development of state-level results by NAEP and the disaggregation of international student performance data to the state level, it is possible not only to compare states against each other but against other countries. As one might expect, these comparisons are met with much criticism and gnashing of teeth regarding how states differ in their student populations, but once the data are available, it is impossible to prevent comparisons.

A decade ago, a teacher, principal, or district curriculum specialist looking for the cutting edge of curriculum and instructional practice in a given content area would probably have consulted a teachers’ magazine or the curriculum collection in the neighboring education school library. In a few rare instances, such as the network of practitioners that formed around high-school advanced placement courses, teachers would be exposed to a fully developed curriculum and a group of colleagues trying to learn how to teach it. Now, many educational practitioners are exposed to a virtual blizzard of leading-edge advice on curriculum and pedagogy, such as national content standards sponsored by professional associations, state curriculum frameworks, and staff development consultants purveying what they consider to be the latest ideas about instruction. It is true that the penetration of these ideas and materials into the classroom is often superficial, that most schools probably still exist as isolated islands of practice, and that most curriculum and staff development materials that are available to most teachers are still of a decidedly mediocre sort. But the important shift from a decade ago is the current existence of a relatively well-organized, extensive professional community, with strong incentives for self-promotion, producing explicit instructional guidance on the leading edge of practice. What’s most remarkable about this growing industry is that it is setting standards of practice that are calculatedly beyond what the average teacher can do, calibrated instead to what students ought to learn.

The principle of dispersed control leads me to predict that states will continue to push toward state-to-school accountability measures until they can muster evidence on student performance that allows them to make a persuasive argument that they are discharging their political and fiscal responsibilities. States and localities vary widely in their capacities and in their political incentives to engage in standard-setting, and therefore the result of this dispersed activity will, at least in the short term, be a high degree of variability in standards from one place to another and (ironically) less standardization of policy and practice from a national perspective. Local districts and the federal government will increasingly become spectators in this state-to-school struggle unless they can find some way to participate in it productively.

The principle of political pluralism leads me to predict that political debate about the content of standards will probably continue, especially in highly contentious areas such as history and literacy, because content is such an attractive target for organized interests. But this debate will increasingly become a sideshow in the larger standards game. Schools, as they are subjected to increasing pressure for accountability, will reach for content and performance standards in order to simplify their task and reduce uncertainty and will find ways to submerge and deflect debate over the content of standards so they can get on with the task of satisfying state and local accountability pressures. The principle of political pluralism also leads me to predict that professional communities and commercial and nonprofit enterprises will become increasingly prominent in supplying advice on curriculum and pedagogy in response to pressures on schools for increased accountability for student performance, further fueling the press for standards.

Notice that there is no necessary coherence at the national or state level in this scenario, at least in the short term. It doesn’t even suppose that there will be coherent goals, standards, and instructional guidance from states to schools, although it will be extremely difficult for states to maintain pressure over the long run on schools if they can’t provide some sort of coherence in their expectations. It does suggest, however, that schools will be subjected to constant pressure for the foreseeable future to focus on demonstrable student learning and to seek external guidance from states, professional communities, and commercial enterprises about how to solve the difficult problems of what to teach and how to teach it. Without countervailing forces from the national (notice I didn’t say federal) level, variable capacities and incentives at the state and local level will probably produce more variability in policy and practice and could produce more variability in student performance.

The TIMSS findings and reform

The authors of TIMSS are careful to point out that their findings largely antedate most recent activity in states and localities concerning standards-based reform. Hence the findings are, in effect, baseline data on the state of instructional practice and student performance in math and science. The picture they give is, I think, exactly what one would expect from a system of dispersed control and political pluralism running on autopilot. In the absence of explicit external standards for content and student performance, teachers give great weight to the way content is portrayed in textbooks, which is the “default mode” for instructional guidance. Commercial publishers have little or no incentive to focus content; their incentives are to produce materials that are marketable to the broadest possible cross-section of customers and to gear content to the largely content-free nature of existing standardized tests. Administrators at both the school and system levels have little or no political incentive to engage in explicit instructional guidance for teachers; their main job is to orchestrate, deflect, and buffer the multiplicity of organized interests that try to influence schools.

Hence the knowledge that is enacted in curriculum and pedagogy becomes a byproduct of the political incentives that operate on teachers: discrete bits of information, emphasis on coverage rather than depth, diffuse and hard-to-understand expectations for student learning, little convergence between the hard day-to-day decisions about what to teach and the largely content-free tests used to assess student performance, and a view of pedagogy as a function of the personal tastes and aptitudes of teachers rather than as a function of external professional norms. Students who do well in such a system recognize that they are being judged largely on their command of the rules of the game, which reward aptitude rather than sustained effort in the pursuit of clear expectations. All systems have a code; the job of the student is to break it. Some do, some don’t.

Instructional change is difficult, demanding, and unfamiliar work for teachers, students, and administrators.

It seems plausible that if one were to impose content and student performance standards, as well as assessments geared to those standards, on this system, school and system-level administrators would focus more on instructional guidance, the variability of instructional practice among teachers would decline, students would receive clearer expectations about what they are supposed to learn, and presto! student performance would both improve and become less variable. This is the underlying theory of standards-based reform. There are two major problems with it. One is that standards-based reform doesn’t displace or override the principles of dispersed control and political pluralism; it blends with them in ways that we understand only imperfectly. The result of this blending might well be, at least in the short term, increased variability in both instructional practice and student performance as states, localities, and schools struggle to adapt to standards under conditions of variable capacity and political incentives.

The second problem with the theory is that instructional change is difficult, demanding, and unfamiliar work for teachers, students, and administrators. Standards reform requires fundamental changes in the way education is practiced and governed. It requires teachers, students, and administrators (not to mention parents) to accept explicit external standards for what constitutes acceptable content and performance. It requires teachers not just to teach to these standards but to learn how to teach in ways most of them have never done before. It requires administrators to redesign their jobs and their organizations to focus on continuous improvement of instruction in classrooms and schools rather than primarily on managing the political environment of schools. It requires teachers and administrators to make hard judgments about whether their colleagues are meeting performance expectations and whether, if they fail to do so, they should be given additional assistance or encouraged to find work elsewhere. It requires teachers and administrators to deal with the inevitable frustration and anger that will come from parents and the public when students and schools are found not to be meeting the requirements of external standards. It requires school governance authorities such as school boards and state legislatures to maintain a commitment to standards over time and to allocate the resources and authority to teachers and administrators that are necessary to sustain this commitment in the face of the inevitable dissent and conflict that will accompany explicit judgments about the performance of students and schools. And it requires education practitioners and school governance authorities to manage and adapt performance standards on the basis of hard evidence about whether they are working to promote student learning. This is just the simplest list of what standards-based reform entails.

Connecting policy and practice

In their profound analysis of the history of U.S. education reform, David Tyack and Larry Cuban note the persistent gap between what they call “policy talk” and the world of daily decisions about what to teach, how to teach, and how to organize schools. Most reforms, they argue, exist mainly in the realm of policy talk: visionary and authoritative statements about how schools should be different, carried on among experts, policymakers, professional reformers, and policy entrepreneurs, usually involving harsh judgments about students, teachers, and school administrators. Policy talk is influential in shaping public perceptions of the quality of schooling and what should be done about it. But policy talk hardly ever influences the deep-seated and enduring structures and practices of schooling, which I have called the “instructional core” of school.

The Tyack and Cuban analysis, I think, accurately captures the way Americans have historically dealt with education reform. The debate on standards-based reform, though, opens up the possibility of dealing with reform in a way that establishes a more direct connection between policy and practice. Never before in U.S. history has there been such a broadly based conversation about these matters. But although the conversation is about important issues related to the core of schooling, it is still largely policy talk because it has yet to address at least two important questions related to sustained improvement of the instructional core: What new knowledge and skills do educational practitioners need to teach to ambitious standards of student learning? And what incentives do practitioners have to engage in the hard work of acquiring and using this knowledge and skill?

“What do I teach on Monday morning?” is the persistent question confronting teachers. Because they are inclined to ask such questions, teachers are often accused by researchers, reformers, and policymakers of being narrow and overly practical in their responses to the big ideas of education reform. Given the state of the current debate on standards-based reform, though, I think the Monday morning question is exactly the right one, and it should be firmly placed in the minds of everyone who purports to engage in that reform.

Consider the following practical issues. Most statements of content and performance standards coming from professionals and policymakers take no account whatsoever of such basic facts as the amount of time teachers and students have in which to cover content. They are merely complex wish lists. In order to be useful in answering the Monday morning question, they have to be drastically pared, simplified, and made operational in the form of lesson plans, materials, and practical ideas about teaching practice. Furthermore, most standards fail to take account of the drastic differences among schools in the type and level of schooling, student populations, resource levels, and the makeup of the teaching force. The problems of implementing standards are also vastly different at the elementary and secondary levels. Elementary teachers usually teach everything. Secondary teachers tend to specialize in particular content areas. Using standards to inform instructional practice at the elementary level can seem impossibly complex to most teachers. I have often visualized this problem by imagining a pile of standards documents, cutting across math, science, English/language arts, and social studies, on the desk of the typical elementary-school student, with the teacher trying to figure out how to reduce the pile to a manageable size and engage the student in some productive work. Most sane people would not stand for this task, yet we expect teachers to willingly engage in something like it. The problem of secondary-school teachers is somewhat different. Although the pile of standards is still impossibly large for a given teacher and collection of students in a given classroom, teachers face the additional problem that we expect them to override years of experience in teaching with a collection of external prescriptions about how they ought to teach. Most sane people would not stand for this task either. They would expect, at the very least, to have an intelligent conversation with someone about why these prescriptions should be useful to them and how they should get from where they are in their current understanding of content to where the standards say they should be.

On this subject, it seems to me that the TIMSS findings carry an important and powerful message: the need for parsimony. The findings paint a picture of scattered and largely shallow coverage of content. Getting from standards as they currently exist to practice focused on a deep understanding of key ideas requires reducing external prescriptions to the minimum possible level and focusing them on the most important aspects of instruction. This idea sounds appealing when stated as an abstract principle, but someone has to engage in the hard and controversial job of throwing out vast amounts of repetitive, overly prescriptive, and distracting bits of content; and someone has to deal with the problem of political pluralism whereby anyone with an idea, no matter how half-baked, about what should be taught can organize a political movement to get it written into the official curriculum. The hard reality is that U.S. schools have no processes in place for making these difficult political judgments, much less making them binding. So getting standards pared down to a manageable level of complexity requires the development of a new way of making curriculum decisions in states and localities, one that holds feasibility in high regard while respecting the enormous pressures of political pluralism. Imagine a world in which state legislators, school board members, and local superintendents play an active role in making the hard judgments that enable teachers to answer the Monday morning question. This is much different from the one in which we presently live.

Suppose we were somehow able to answer the Monday morning question and create a system of standards that values parsimony over undisciplined pluralism. Here the brute facts are even more daunting. The work day of most teachers allows them virtually no time to engage in any sustained learning about how to do their work differently. Their time is fully scheduled during the school day, with the exception of a few brief and scattered preparation periods. The time available in the summer is time when students are typically not in school, so the learning that occurs then is largely done in isolation from actual practice in classrooms. Organized professional development, if it occurs at all, takes place in most local school systems during a few days scattered throughout the school year or in after-school sessions, again in isolation from actual practice. Most professionals learn new practices by working with other professionals in close proximity to the details of practice and by making their clients pay for the surplus time required to retool and renew themselves. However, we expect teachers to learn new practices not as part of their daily work life but by sandwiching time for learning into spaces in the day and year when students are not there. Every minute of time for professional learning that comes at public expense is begrudgingly granted.

If we want educators to do their work differently we have to reward them for doing the right things.

Time is money in the educational enterprise, as in all others. Creating more time in the space of the school day for teacher learning might mean hiring more people to cover classes when teachers are engaged in learning. Organizing professional development experiences around actual instruction in the classroom with real students means hiring people to consult with teachers or freeing up other teachers to work with their colleagues. Creating manageable instructional materials for teachers to use and adapt in their classrooms requires time on someone’s part to sift through the multiple competing packages to find the best and most appropriate. Some of this time and money can be extracted from the organizational slack that exists in most school systems. But some of it will have to come from additional resources, wisely invested. Who is going to make the difficult adjustments necessary to pry loose existing resources and find new ones? Again, this looks like a job for state legislators, school boards, and local administrators, but it is one for which they are currently ill-prepared.

Now imagine that we somehow solve both the parsimony problem and the problem of how to organize learning for teachers. We are left with the still more daunting problem of how to adapt general guidance for instruction and learning of new practices to the realities of diversity among schools. Would we expect instructional improvement to look the same in a school that serves students from several language groups, many of whom come from homes where there are no computers and few books, as in a school where all the students are fluent in English and come from homes that have computers and are packed with books? Would we expect the same kinds of materials to work for teachers faced with children struggling to understand what school is about as for teachers in schools where students come steeped in knowledge about why they are in school? Would we expect the same instructional content and pedagogy to be successful in a classroom where all but a few students are reading at the same grade level as in a classroom in which the range of reading ability spans several grade levels? To answer the Monday morning question credibly in all schools and classrooms, we have to find a way to make standards responsive to such variations. Again, this sounds like a job requiring a much different kind of leadership from policymakers and administrators. But it also requires the sustained engagement of teachers in understanding how to adapt general prescriptions to the specific conditions of their classrooms, without shortchanging students who are being asked to compete with other students who are not like them.

The knowledge and skill problems presented by standards-based reform are deep and difficult. They cannot be solved by engaging in more sophisticated policy talk, but only by sustained engagement between policymakers and practitioners in difficult discussions about resources, expectations, and the realities of diversity in schools and classrooms. If either side of this discussion pulls away from the other, standards-based reforms will go the way of earlier reforms: They will exist largely in the realm of policy talk and, more ominously, will devolve into blaming teachers and students for the failures of everyone.

The incentive problem is equally difficult. The work described above is not only hard and demanding, it is different from the work we have asked teachers and administrators to do in the past. These new definitions of educators’ work may seem self-evident to critics and reformers, but they are far from self-evident to people who work in schools. Educators, like everyone else, do what they are rewarded and reinforced for in their daily life. If we want educators to do their work differently, we have to reward them for doing the right things. Acknowledging that we know almost nothing about how to do this is the first and most important step in understanding how to do it well.

The existing array of standards-based reforms now in place in states and localities contains the first attempts to solve this incentive problem. The idea that schools should be evaluated and rewarded on the basis of student performance is now creeping into policy talk, but it has yet to work its way into educational practice. One doesn’t have to think very hard to understand the destructiveness of this idea. Imagine a world in which, overnight, all schools were rewarded financially, and were ultimately permitted to live or die, on the basis of some measure of gains in student performance based on clear standards. The clearer the standards and the more direct the rewards and sanctions, the worse the consequences. Race, social class, and home environment are the strongest predictors of educational performance for students. Rewarding and punishing schools on the basis of their performance under these circumstances means, in effect, rewarding and punishing them for the students they serve. Worse yet, adjusting rewards and punishments for student background probably means that certain schools will be allowed to continue to have lower expectations for their students than other schools, thus defeating the main purpose of standards-based reform, which is to promote high-quality learning for all students. Under these circumstances, the more pressure we apply in the form of external standards of student performance, the more variability we are likely to create in the very areas where we are trying to reduce it. Still more troubling, the more pressure we apply, the more we are encouraging schools to recruit “good” students and push away “poor” students, and the more we are encouraging schools to blame students and their families for the schools’ failures. If this sounds like a dismal prospect, I want it to. It is a horrendously difficult problem and it needs to be faced squarely by everyone who supports standards-based reform.

I propose a new principle of standards-based reform, which I call “reciprocity of capacity and accountability.” The principle goes something like this: Every increase in pressure on schools for accountability for student performance should be accompanied by an equal investment in increasing the knowledge and skills of teachers, administrators, students, and their families for learning about how to meet these new expectations. In its simplest form, this principle means that no school is judged to be failing until policymakers are satisfied that investments in learning new ways to teach, new ways to manage instructional improvement, and new ways of understanding student and family responsibilities have been implemented and paid for. In its more complex form, this principle means that everyone who occupies a position of formal authority in the educational system should judge their actions against the criterion of value added to instructional improvement.

If schools are to be held accountable for student learning, then the people who run them should be judged by the extent to which they add value to the quality of classroom instruction. So the first diagnosis of school failure should not be directed at teachers and students but at the way policymakers and administrators have organized resources to promote new knowledge and skills in schools. For example, a failing school in which teachers have not had sustained and effective professional development, organized in a way that is directly connected to standards for student performance, is not a failing school. It is a school managed by failing policymakers and administrators. In a system governed by the principle of reciprocity of capacity and accountability, everyone would rewrite their job description in terms of the value they add to the improvement of classroom instructional practice. If you can’t rewrite your job description in this way, look for work elsewhere.

Every school administrator should be judged against the criterion of value added to instructional improvement.

The answer to the question of how we give educators the incentives to do the kind of work required by standards-based reform is that we provide them with the necessary knowledge and skills and we reward them for participating in such activities, in tandem with the implementation of external accountability measures that are designed to reward and penalize schools for student performance. No external accountability measure should be implemented without a specific investment in knowledge and skill designed to improve the capacity of educators to meet that measure. Furthermore, when external accountability measures are found to reward the wrong things, such as rewarding schools for shifting students around rather than educating the students they have, then the measures should be changed. I cannot stress enough that we know little or nothing right now about how to engage in this delicate balancing of capacity and accountability. Until we learn how to do it better, we should be modest in our demands on schools for external accountability measures and ambitious in our attempts to solve the capacity problem.

Environmental Policy: The Next Generation

A generation ago the Cuyahoga River in Ohio was so contaminated that it caught fire, air pollution in some cities was thick enough to taste, and environmental laws focused on the obvious enemy: belching smokestacks and orange rivers that fouled the landscape. Since the time of Earth Day in 1970, we have cleaned up thousands of the “big dirties” through the use of pioneering federal legislation designed to take direct action against these threats to air, water, and land. Now, a generation later, we must confront environmental problems that are subtler, less visible, and more difficult to address: fertilizer runoff from thousands of farms and millions of yards; emissions from gas stations, bakeries, and dry cleaners; and smog produced by tens of millions of motor vehicles. Like nature itself, the size and shape of environmental problems constantly evolve; so too must the strategies, approaches, institutions, and tools chosen to address them.

At first blush, many people might conclude from the visible improvements to the environment that we have done our work well and that, except for maintenance, the federal government should move on to other pressing priorities. Others would prefer to see a rollback of environmental legislation, as was proposed in the 104th Congress, in the belief that we have simply gone too far. Even those who support environmental investments might feel that the enormous problems of clean water and air in the world’s developing megacities or habitat destruction in Asia or South America are more important than reforming environmental protection in the United States.

These assessments overlook some important facts. First, many once “quiet” issues are emerging as population densities increase. Second, our understanding of ecological and public health threats continues to change. Substances that were beneficial in direct application, such as chlorofluorocarbons, turn out to be harmful long after they have served their local function. Third, the environmental advances of recent years are not evenly distributed among urban and suburban areas, rich and poor communities, and geographical regions. Fourth, we are just beginning to appreciate how deeply the environment is intertwined with many other issues such as human health, energy and food production, and international trade. Thus, rather than retrench, we must renew our commitment to environmental protection.

Whereas individual reforms are slow and hard-won, collective change can occur rapidly and has made the world a dramatically different place than it was in 1970. Globalization, the dominance of market economies, and the revolution in information technology all greatly alter the setting of environmental policy and require that we pursue it differently than we have before. We must recognize the competing desires that citizens everywhere have for a cleaner environment and other things: mobility, economic growth, jobs, competitive industries, and material comforts. Environmental policy cannot be made in isolation from other issues. Policies in tune with the people whose lives they are meant to serve increase the prospects for winning the public and political support necessary to effect change. We need a systems approach built on rigorous analysis, an interdisciplinary focus, and an appreciation that context matters.

Environmental law and good intentions

The first generation of environmental policy was built on a complex system of environmental law that separates environmental problems by media (such as air and water) and by class (such as pesticides or hazardous materials). At the heart of key legislation such as the Clean Air Act and the Clean Water Act is a system of standards, set by federal administrative agencies, that regulates emissions to air, water, and land. Most often, the states are required to translate federal goals into facility-specific legal requirements. Commonly referred to as a “command and control” system, it means that government both commands what the pollution reduction targets should be and controls, in much of the regulation, exactly how those targets will be met.

Many are quick to reject out of hand the complicated legal structure that has evolved. But none of these approaches (standard setting, dividing up problems, delegating implementation) is wrong. Indeed, separating the work of environmental protection into air, water, waste, and other subdivisions makes the problems more tractable and accessible. Setting specific standards requires everyone to play by at least some of the same rules. And when the target is on the right problem, such as the health effects of lead and the decision to prohibit leaded gasoline, the results can be impressive. Indeed, these approaches provide a useful starting place for today’s environmental protection efforts.

At the same time, the complex structure of separate and sometimes conflicting laws and very detailed and often rigid regulations to deal with them has trivialized some of the most important legislative goals. Consequently, some aspects of compliance seem marginal or even counterproductive. Most important, the current approach often leads to fragmentation. It becomes extremely difficult to reassemble the parts to look at them in ways that allow for new thinking and the integration of new information. In the words of policy scientist Harold Lasswell: “Fragmentation is a more complex matter than differentiation. It implies that those who contribute to the knowledge process lose their vision of the whole and concern themselves almost exclusively with their specialty. They evolve ever more complex skills for coping with their immediate problems. They give little attention to the social consequences or the policy implications of what they do.”

Within the U.S. environmental protection program, fragmentation has taken its toll in three key areas: overemphasis on the pieces at the expense of the whole, disregard for problems in sectors not considered environmental, and neglect of new problem areas that fall outside of the regulatory net.

Pieces and the whole

By overemphasizing the role of single chemicals and single media in pollution policy and of single species in land management policy, we underestimate the interactive effects of chemicals, the cross-media effects of emissions, and the interdependence of habitats. For example, pollution does not respect legislated boundaries such as air, water, and land. Sulfur dioxide released into the air, even by a tall smokestack, does not disappear, but can come back as acid rain that threatens lakes and forests. If we trap emissions before they leave the smokestack, we create a sludge that becomes a hazardous waste disposal challenge. Fragmented law fails to account for instances in which pollution is merely shifted from one place to another rather than reduced or eliminated.

In the same vein, knowing the effects of individual chemicals is not a basis for understanding how these chemicals will act together. In switching from DDT to seemingly safer organophosphate pesticides, we studied the neurotoxic effects of each new product, but we now suspect that the combined impacts are much greater than the individual effects would suggest. When we focus on a single species, such as the spotted owl, we miss the proverbial forest for the trees; the loss of one species is often a signal of significant alteration to an entire habitat or ecosystem.

Organizationally, overemphasis on pieces leads to the creation of separate professional specialties and, many times, to separate bureaucratic units in the government. These units are also mirrored in industry and in the environmental advocacy community. On the one hand, much knowledge can be generated through a targeted focus; on the other hand, organizational culture can act as an important impediment to change. We start to think that each bureaucracy can handle its own environmental insult. When the Environmental Protection Agency (EPA) and the state departments of environmental protection do not solve environmental problems, we conclude that these agencies are broken and must be fixed.

To the contrary, these agencies have been hard at work on the specific problems they have been assigned: the 13 statutes that EPA administers, the delegated responsibilities of the states, and the additional responsibilities state departments have taken on in response to local needs. Therefore, calls to reinvent EPA or simply to devolve or deregulate are off track. It is not a matter of restructuring EPA or offering incentives for it to try harder; it is a matter of doing things differently.

Current policy focuses on pieces at the expense of the whole and neglects new problems in areas that fall outside of the regulatory net.

Disregarding environmental problems elsewhere

Today, environmental quality depends fundamentally on choices made well beyond the realm of environmental decisionmakers in numerous other sectors. Even a look at the government roster reveals many others besides EPA with environmental responsibilities. Open up any one of those boxes (the Department of Agriculture, for example) and you will find thousands who are involved with environmental quality: farmers, food processors, pesticide manufacturers, grocery wholesalers and, of course, shoppers. What we must recognize in the next generation is that EPA and its state counterparts are smaller pieces of a much larger environmental protection system.

In the next generation of policymaking, the issues of other sectors will dominate more and more. To date, public policy in agriculture has amounted to commodity policy, largely ignoring environmental threats to land and water. Transportation issues lie at the center of good land use planning as well as successful management of air emissions and water runoff. Consider the impact on the environment of the restructuring of the electric power industry. If environmental spillovers are ignored, highly polluting coal-burning plants can offer more competitive prices than cleaner power sources. But this does not represent efficiency; it demonstrates market failure, which leaves us all losers.

The rise of the service-based economy (now some 75 percent of the U.S. gross domestic product and some 80 percent of jobs, in industries such as telecommunications, health care, banking, insurance, and distribution) stands out as another sector that has received too little attention. With such a strong emphasis in first generation environmental law on manufacturing plants, we are unsure of how to approach a sector in which the pollution is less obvious than in the smokestack industries. When we think of making steel, we imagine pollution. When we think of hospitals delivering health services, we do not immediately focus on the difficulty of disposing of hypodermic needles or radioactive waste.

Yet service companies such as Federal Express and United Parcel Service have changed how business does business with regard to warehousing and logistics. Consumers have become accustomed to overnight delivery, but the tools of environmental analysis have not yet been turned to questions such as how much gasoline and jet fuel it takes to deliver a catalog-ordered sweater in one day instead of two, compared with driving downtown or to a regional mall to purchase the same sweater. We are just beginning to consider the new set of environmental management issues raised by various elements of the service economy.

Neglecting new problems

The challenges we confront today (atmospheric buildup of carbon dioxide and other greenhouse gases, the potential environmental impacts of genetically modified organisms, and the risk of exposure to trace residues of pesticides that might disrupt endocrine cycles within a human body) were not even contemplated by first generation environmental laws. The ability of science to detect phenomena has grown exponentially since the first generation and this knowledge should be very useful in focusing us on potential new harms.

But even after science has detected a problem, it is not always easy to get it into the environmental policy hopper. By shining the regulatory spotlight so intensely on only a few issues (what some have called an inch wide and a mile deep), we miss many more. It can take years to recognize emerging issues through conventional government channels. Even then there is no assurance that we will have the tools to deal with the problems identified. We are most often left applying old methods to new problems or trying out new methods with great uncertainty concerning hazards, risks, costs, and benefits.

In Keeping Pace with Science and Engineering: Case Studies in Environmental Regulation, the National Academy of Engineering catalogs the often unsatisfactory results when laws lag increases in knowledge in areas such as nutrient loadings in the Chesapeake Bay, tropospheric ozone, and acid deposition. Uncertainties are high, almost by definition, because the problems that environmental regulations try to address are at the cutting edge of current scientific understanding. All other things being equal, concludes J. Clarence Davies of Resources for the Future in Washington, D.C., the more new scientific information threatens the public and private sector status quo, the longer it takes to incorporate that new information into decisionmaking.

Future policy must recognize shades of gray, creating incentives for good performance but still holding laggards accountable.

Environmental politics

The politics of first generation environmentalism was confrontational in style and polarizing in practice. It found villains and named names. It pitted the economy against the environment. Now we recognize that environmental protection cannot be boiled down to a struggle between the “good guys” (environmental activists) and the “bad guys” (big industry). The corporate world is not monolithic with regard to environmental performance. Some companies take environmental stewardship very seriously while others pollute with abandon. The next generation of environmental policy must recognize shades of gray, create positive incentives for the leaders, and still hold the laggards accountable.

Once we accept a systems view, our political thinking necessarily changes. Beyond the point-source polluters (the largest factories) are the thousands of smaller firms and farms whose releases are individually very small but cumulatively very large. There are millions more of us whose everyday activities, from our lawns to our cars, add to this cumulative impact. Politically, it is far easier to clamp down on a few thousand big businesses than it is to reach each citizen. Although poll after poll shows that some 80 percent of Americans consider themselves to be environmentalists, we do not always act like it. Environmentally, there is great truth to the comic expression: “We have met the enemy and it is us.”

Next generation approaches and tools

We have just released a study aimed at reconfiguring the U.S. environmental reform debate called Thinking Ecologically: The Next Generation of Environmental Policy. What should we actually do as a result of thinking ecologically, and who should carry out the agreed-upon policy decisions? Our four central recommendations for ecological policy are: Do not focus only on EPA and the government, but on the critical roles of other actors and sectors; move from heavy reliance on command and control approaches to include more flexible tools; recognize the potential of the market as an ecological model that is dynamic and flexible; and adopt systems approaches such as industrial ecology and ecosystem management that foster an examination of context and address interconnections rather than singular phenomena.

Reaching beyond the traditional environmental enforcement community is essential. Environmental protection cannot be, as past efforts were, so dependent on government as initiator, implementer, and enforcer. The spectrum of environmental decisionmakers is very broad and includes mayors, transportation system designers, route planners for overnight packaging companies, farmers, energy marketers, and international trade negotiators. The flowering of nongovernmental organizations plays an especially important role in the environmental arena. Grassroots activists demand local protection, and more broadly chartered groups, often with strong analytical capabilities, demand better government and industry performance nationally and internationally. Finally, ecological thinking must become everybody’s business as each of us considers where to shop, what to buy, how much to drive, where to live, and what to throw away.

The success of recycling programs across the country demonstrates the potential for mobilizing the public. Other initiatives that have the potential to increase efforts by individuals toward environmental protection in the next generation are those that allow for informed choices. Eco-labels, similar to nutrition labels, present information to consumers and allow them to choose between environmentally responsible products and those inattentive to environmental impacts. Soon, a large number of consumers may be able to buy “green energy,” electricity derived from sources such as wind power or photovoltaics that are less damaging to the environment than energy from fossil fuels. Although the size of the market for green energy is unknown, many private companies are very interested in its potential.

Participation by the private sector is essential to the success of next-generation policy. Industry is the key repository of much of the expertise to support technological innovation, which is critical to advancing the twin goals of economic growth and environmental protection. Companies can act environmentally with no government push. For example, when McDonald’s stopped using polystyrene sandwich packages, the decision affected some 40 percent of the polystyrene market. Home Depot has gone to great effort to provide “green” products to its customers, and Walmart set up an environmentally designed store in Lawrence, Kan. Such firms play a key role in both satisfying and creating consumer preferences, including consideration of the environment.

Greater flexibility

It is difficult to be referee and quarterback at the same time. Under the current regulatory scheme, government sets the rules, which is necessary and appropriate, but it also tries to dictate exactly which plays to use. Now we see that this approach stifles innovation, does not account for differences across industries and ecosystems, and creates incentives to try to get around the law.

Another approach would be to continue to use the existing regulatory system as a minimum benchmark but to try, at the same time, to increase opportunities at all levels of implementation to improve environmental performance through other than narrowly prescribed regulatory means. In other words, the government should still command, but it does not need to control exactly how regulated parties achieve compliance with established goals. The regulated community should be empowered to design its own enforceable alternative compliance methods, provided they achieve equivalent or better environmental performance. In this system the government commands what the goals should be, but the two parties make a “covenant” concerning how to achieve the goals given the particulars of place, industry, and circumstance.

Such an approach may be costly at first for companies and regulators. But the long-term payoff, measured by enhanced competitiveness and better-targeted environmental protection, would be great. Another advantage of this approach is that it unleashes rather than inhibits technological innovation. Rigid standards encourage the use of a particular technology not because it is superior, but because it is the one most familiar to regulators. How much better it would be to have companies fighting over an environmental protection approach that also affords them a competitive advantage technologically.

Innovation is important for technology and policy. One way to add innovation to the environmental law system would be to extend the “bubble” concept. Imagine placing a bubble over a whole factory, over many enterprises, or over a whole region. Inside the bubble there is an established budget for pollution, but it could be balanced in many different ways as long as the total emissions do not exceed the agreed upon amount. Professor E. Donald Elliott of Yale Law School prescribes a broadening of the concept so that within “multimedia bubbles” environmental management obligations can be traded across different types of pollution. Allowing entities to control pollution more from one process and less from another means a factory, network, or region, by adapting to local conditions, would have the opportunity to achieve the same or better total level of pollution control at far lower costs.

This type of system extends beyond the smokestack industries and can be used to bring in service companies and other sectors as well. Elliott writes in Thinking Ecologically: “A refinery that has already controlled most of the sources of volatile organic compounds (VOCs) within its boundaries that are easy and cheap to control may be able to achieve needed additional reductions more efficiently by paying a local dry cleaner to upgrade its machinery to reduce VOCs, or by redesigning a consumer product to eliminate VOC releases to the environment. The incentive to find innovative opportunities to reduce pollution, primarily from the multiplicity of pollution sources that are presently outside the existing command-and-control system, is one of the most attractive features of expanding the bubble concept.”
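To make the economics of the bubble concrete, here is a minimal sketch, in Python, of how an emissions budget might be allocated among sources sharing a single bubble. It assumes constant per-ton abatement costs and simply buys reductions from the cheapest sources first; the source names and numbers are hypothetical illustrations in the spirit of Elliott’s refinery and dry cleaner example, not data from the article.

    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        baseline_tons: float   # current annual emissions inside the bubble
        cost_per_ton: float    # dollars per ton of pollution removed
        max_cut: float         # tons this source could feasibly remove

    def allocate_reductions(sources, bubble_cap):
        """Assign cuts cheapest-first so total emissions fit under the cap."""
        needed = max(0.0, sum(s.baseline_tons for s in sources) - bubble_cap)
        plan, total_cost = {}, 0.0
        for s in sorted(sources, key=lambda s: s.cost_per_ton):
            if needed <= 0:
                break
            cut = min(s.max_cut, needed)
            plan[s.name] = cut
            total_cost += cut * s.cost_per_ton
            needed -= cut
        if needed > 0:
            raise ValueError("The cap cannot be met with the listed sources")
        return plan, total_cost

    # Hypothetical sources sharing one multimedia bubble (numbers are illustrative).
    sources = [
        Source("refinery VOC units", 500, 900, 50),
        Source("local dry cleaner", 40, 200, 30),
        Source("consumer-product reformulation", 60, 350, 45),
    ]
    plan, cost = allocate_reductions(sources, bubble_cap=540)
    print(plan, cost)  # the cheapest reductions are purchased first

The point of the sketch is only that, once a single budget replaces source-by-source mandates, the cheapest reductions get bought first, whoever happens to own them.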

The market as a model

We have seen that being flexible and being able to keep pace with change are critical elements of next-generation environmental policy. In many ways, the operations of the market allow more leeway for accomplishing these objectives than the labyrinthine governmental approach. But before we can significantly rely on market-based policies such as fees and taxes, pollution allowance trading systems, or pay-as-you-throw garbage programs, we must be sure that market prices reflect fully the public health and ecological harms and benefits of goods and services. If we “get the prices right,” even those who pay no attention to the environment can be influenced by the invisible green hand of market forces toward environmentally responsible behavior.

Ways to use the interconnected web we call the market are illustrated by the following suggestions for next generation policy:

  • Establish, in agriculture, a negative pollution tax so farmers pay for their pollution but are also rewarded for constructive environmental actions. This would require administrators to establish threshold levels of pollution from nutrients or herbicides, for instance, as determined by monitoring and evaluation. Economist Ford Runge of the University of Minnesota proposes a two-level threshold. One would set the maximum acceptable usage level based on local conditions. A farm that exceeded this level would be penalized. Taxes would decrease until the second threshold level, below which farmers would be rewarded by reduced taxes or even subsidies, which could be used to encourage improved technologies such as precision farming or integrated pest management. Eventually, a trading program could be added based on the results determined for the negative tax program. (A simple payment schedule along these lines is sketched after this list.)
  • Adopt, in transportation programs, variable highway usage fees in order to mitigate the impact of motor vehicles on air quality, habitats, and other resources. Road use is far from “free,” and drivers should be charged according to the impacts of their use. As with telephone calls made during the business day, charges should be higher when highway use is greatest because impacts are greatest as well.
  • Support a “wetlands mitigation banking program” under which those who diminish the amount of wetlands through development must buy credits from the wetlands bank in order to provide resources to expand or enhance wetlands elsewhere in the ecosystem.
  • At the international level, recognize that private capital flows can be the central driver of sustainable development. Although appeals for increased foreign aid to assist with infrastructure projects have largely been overlooked, private investment in developing countries quadrupled between 1990 and 1995. Therefore, governments must learn how to attract and channel foreign investment. Brazil’s national development bank, for example, has implemented a “Green Protocol” that encourages federal public lending to environmentally friendly projects.
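As a rough illustration of the two-level negative pollution tax in the first item above, the following Python sketch computes a farm’s annual payment from its monitored usage. The threshold levels, rates, and farm readings are hypothetical placeholders; in practice they would be set locally through the monitoring and evaluation Runge describes.

    def farm_payment(usage, lower, upper, penalty_rate=50.0, band_rate=20.0, subsidy_rate=20.0):
        """Annual payment for monitored nutrient or herbicide usage (e.g., kg/hectare).
        Positive values are taxes owed; negative values are subsidies earned."""
        if usage > upper:
            # Above the maximum acceptable level: pay the full tax on the band
            # between the thresholds plus a penalty on every unit of excess.
            return band_rate * (upper - lower) + penalty_rate * (usage - upper)
        if usage > lower:
            # Between the thresholds: the tax shrinks as usage falls toward
            # the lower level, rewarding movement in the right direction.
            return band_rate * (usage - lower)
        # Below the lower threshold: constructive practices earn a subsidy.
        return -subsidy_rate * (lower - usage)

    # Three hypothetical farms, monitored at 180, 120, and 70 kg per hectare.
    for usage in (180, 120, 70):
        print(usage, farm_payment(usage, lower=90, upper=150))

A trading program of the kind the bullet eventually envisions could be layered on top of such a schedule, for instance by letting farms below the lower threshold sell their headroom to farms above the upper one.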

Adopting systems approaches

Our structure of environmental law violates the basic principles of ecology, which emphasize the connectedness of natural systems. Furthermore, emissions from one factory differ from those of any other factory, and what harms one river may not be equally harmful to another. The context in which events occur is an important consideration as we lay the groundwork for a more comprehensive, effective, and efficient regulatory structure.

Ecosystem management is a systems approach that looks at the overall structure and behavior of a given area, such as a watershed, a forest, or even a city, analyzes it, and, through “adaptive” management, prescribes programs that can change based on knowledge of specific places and phenomena. The emerging field of industrial ecology, another systems approach, explores technological and natural systems together, viewing environment not as a place removed from the world of human activity, but as intrinsic to industrial decisionmaking. Industrial ecology also highlights the opportunity to look to the natural world for models of efficient use of resources, energy, and wastes. By looking at the flow of products and processes from cradle to grave, it improves our ability to look across problems and to identify emerging issues.

Future enforcement efforts must extend beyond EPA and government to other key actors and sectors.

Inspiring the American people to support careful, thoughtful, and enduring environmental reform in a context where the enemy is hard to see and progress is measured incrementally poses a significant challenge. To some observers, the call for more comprehensive analysis and greater attention to interconnectedness may hark back to the innumerable pleas of the 1960s for such virtues. However, integrated and broad-scale thinking is possible today in ways that were unimaginable a generation ago. Now we have a base of policy practice and experience to build upon. Advances in information technologies make the amassing, assessing, and simultaneous processing of vast quantities of data not just conceivable but ever easier.

At one level, first-generation environmentalism was based on suspicion of human activity that always seemed to cause pollution and threats to human health. The only remedy was centralized command and control. Next-generation policies must instead be built on an ecologicalism that recognizes the inherent interdependence of all life systems. This demands, on the one hand, an expanded view of human impacts on the natural environment going beyond pollution to habitat destruction, loss of biodiversity, and climate change. On the other hand, it requires an appreciation of the connectedness of all life systems, including human advancement. This focus on linkages and on an ecological perspective leads to a more benevolent view of human activities and a belief in sustainable development.

Forum – Fall 1997

Fusion: Pro and con

The two articles in the Summer 1997 Issues on the future of the proposed International Thermonuclear Experimental Reactor (ITER) program–“The ITER Decision and U.S. Fusion R&D,” by Weston M. Stacey, and “Fusion Research with a Future,” by Robert L. Hirsch, Gerald Kulcinski, and Ramy Shanny–reflect the opposing arguments in a debate that we in Congress will have to join, beginning in 1998. Either we put all our eggs in the tokamak basket or we abandon the tokamak design to pursue alternatives. I don’t believe we have to make that stark choice.

I write as a strong supporter of DOE’s restructured fusion program. My bill, the Department of Energy Civilian Research and Development Act of 1997, increased the authorization for the fusion program by $15 million over the president’s request and included the $55 million requested for ITER-related activities in FY 1998. This bill passed the House Science Committee with strong bipartisan support.

Weston M. Stacey is right when he says that the arguments for federal support of fusion research are compelling. It’s also fair to say that the current program is a drastic cutback from the vision for fusion of 10 or even 5 years ago. Although I and others in Congress support ITER, Stacey is unrealistic in advocating the commitment of billions of U.S. dollars for the construction phase of ITER at this time. It has been hard enough to maintain level funding for basic scientific research as we move toward a balanced budget. The fact is that the Clinton administration has consistently shortchanged basic science funding in order to increase funding for marketing development and promotional activities that are more politically attractive.

A massive increase in funding for ITER now would only crowd out other important science programs, such as the scientific user facilities at our national laboratories. For example, I recently had to fight to restore funding for user facilities at the Stanford Linear Accelerator Center, which had been cut in the administration budget request.

The article by Robert L. Hirsch et al. makes the good point that, under the current budget climate, we would be foolish to ignore research on alternative fusion concepts that may lead to a cheaper, more practical use of fusion power. However, it would be wrong to abandon our ITER commitment at this time to pursue alternatives. At the $240-million authorized level for fusion in the House bill, there are adequate funds to pursue alternative concepts. We should also remember, however, that good science is being produced in tokamak experiments such as the DIII-D at General Atomics in San Diego. It would be just as foolish to throw away the fruits of those scientists’ work as it would be to ignore the alternative concepts.

Congress is committed to completing U.S. participation in the ITER design phase. In 1998, the administration, Congress, and, perhaps most important, Europe and Japan, will have to start making the hard decisions on where we go from here and how to pay for it. Until that happens, I believe that a broad-based policy of support for all these elements of the fusion program is the best way for U.S. taxpayers to get the most for their limited funds.

REP. KEN CALVERT

Republican of California

Chairman, House Energy and Environment Subcommittee


Weston M. Stacey’s article is an enthusiastic recapitulation of all the promises of the fusion concept and stresses the value of U.S. participation in the proposed ITER program. The article by Robert L. Hirsch, Gerald Kulcinski, and Ramy Shanny is a more realistic recognition of the uncertainty of fusion technology and a plea for more scientific creativity in developing alternatives to the ITER tokamak concept.

Unfortunately, Stacey’s premises for justifying federal support are factually misleading. (1) The fuel supply is not “virtually unlimited” because the availability of lithium, which is essential in the deuterium-tritium (D-T) fuel cycle, is similar to the availability of uranium: ample now, but finite. (2) The contention that the tokamak concept might eventually compete with advanced nuclear fission and fossil plants is wishful thinking that ignores the reality of the tokamak’s complexity and size, arising from its plasma and engineering requirements, as compared with those of a fission or fossil plant. Today’s estimate by fusion enthusiasts of the capital cost of the ARIES tokamak plant is at least three times that of a nuclear fission plant, and experience suggests that it is likely to be much greater when the real costs of fabricating the complicated magnet, heat-transfer, containment, and maintenance systems are included. (3) Finally, the environmental benignity of fusion is a matter of degree, only slightly better than fission, and neither is as environmentally attractive as solar sources. The radioactive wastes from both need similar custodial attention during the initial century. Fusion does not produce fission products or plutonium, but it does produce tritium, and both are hazardous materials, although plutonium is of more concern in the weapons area.

It is unfortunate that the fusion community has perpetuated the myth that fusion is a foreseeably practical end-game for our energy resources. With the present concept, it certainly is not. It is, of course, a fascinating scientific experiment and should be evaluated and supported in that light. Stacey presents ITER as a test facility and thus a step toward the successful development of fusion. ITER might test some parts of the tokamak concept, but this will not be sufficient for a practical plant design. U.S. participation in such an international facility is a political as well as technical matter.

Hirsch, Kulcinski, and Shanny recognize the uncertainty of tokamak fusion as a national energy source. It is time for the fusion community to acknowledge this reality, so that the public is not further misled and the politicization of this area of science is not continued. The public and Congress have become increasingly cynical about the intellectual integrity of the physics community, and fusion is a case in point. In this regard, the Hirsch, Kulcinski, and Shanny article is a step toward reevaluating the appropriate role of fusion research in our national science programs.

CHAUNCEY STARR

President Emeritus

Electric Power Research Institute (EPRI)

Palo Alto, California


Weston M. Stacey’s article defending the brilliance of and need for the ITER tokamak program can best be put into perspective by noting that its author has been the chairman of the ITER Steering Committee for the past seven years. No matter how unpromising and wasteful the ITER effort, it would be very surprising to find him critical of the program that has provided such a good living for so long!

Robert L. Hirsch, Gerald Kulcinski, and Ramy Shanny’s article is less self-serving. It is certainly correct in its recommendation that ITER be abandoned, but it exhibits a touching naiveté in its closing argument that the present budget should be retained to support a redirected research effort. The naiveté lies in their unstated assumption that such redirected budgets would be spent in intelligent ways; history provides no hope of this, I fear. None of the numerous past studies or workshops run by the U.S. Department of Energy (DOE) Magnetic Fusion Office, ostensibly held to consider new “alternative” or “advanced” concepts for fusion, have ever resulted in new and hopeful directions. All have been used to shore up the fatally flawed big tokamak program by excluding any new ideas for small, quick, and simple fusion that might threaten the big budget base of the main program.

Because preservation of the budget base (not fusion success) is the program rationale, it is clear that a national fusion program can be saved only if the current budget is reduced to zero as swiftly as possible. Then the program can be restarted with wholly new directions (and new management at DOE headquarters and the DOE labs) toward concepts that really do offer small, quick, clean, and cheap fusion power systems-if they work. No other R&D should be allowed. If no such concepts can be identified within the DOE framework, there should be no DOE program in fusion. Rather, the national effort should solicit and support such concepts directly in private industry by using a combination of guarantees of future markets, cost-matching grants, and prizes for defined levels of technical and economic success. This apparently draconian approach simply reflects the fact that the present DOE fusion program management and the lab direction of R&D activities have shown repeatedly that they will not pursue new directions but will fight to continue more of the same big ugly tokamaks.

Once a Tokaturkey, always a Tokaturkey. The Gothic cathedral builders built to the glory of God; these technological cathedral builders build to the god of Mammon (as revealed through research and retirement, using science as pork). But members of Congress are not as stupid as DOE bureaucrats and fusion physicists (and their managers) think them to be, as evidenced by Congress’ continuing reduction of the program budget for the past 17-plus years. It is now at a level less than twice that (in real dollars) at which it started in 1972, when Drs. Hirsch, Alvin Trivelpiece, and Stephen Dean, and I sold its 20-fold escalation to a Congress driven by the Arab oil crisis.

Sic transit gloria mundi. Kill the present program and start over.

ROBERT W. BUSSARD

Energy/Matter Conversion Corp.

Manassas Park, Virginia

(Bussard was assistant director, development and technology, at the U.S. Atomic Energy Commission’s Controlled Thermonuclear Fusion Program in the early 1970s.)


Weston M. Stacey and Robert L. Hirsch, Gerald Kulcinski, and Ramy Shanny express views for and against the construction of ITER. Stacey likes ITER because it “is . . . a major step toward a safe and inexhaustible energy supply for humanity: practical power from fusion.” On the contrary, say Hirsch et al., “D-T tokamaks, as we understand or envision them today, simply do not afford a workable approach to commercial fusion power.” I believe that both articles have elements of truth, but both have a limited perspective.

Stacey’s arguments for ITER are excellent but do not give proper recognition to the fact that the electricity generation marketplace is even more competitive today than in the past, and is becoming more competitive each day. The fusion community has not come up with a game plan that shows how they can compete in that marketplace with their current tokamak concepts. To expect the world’s governments to ante up $10 billion for ITER, in today’s fiscal and energy market environments, may be asking too much.

Hirsch et al. do not give proper recognition to the very large uncertainty about what the marketplace will actually look like in 20 to 50 years. Today’s market is dominated by cheap, available fossil fuels. But various environmental or political realities could deal a death blow to the use of fossil fuels for electricity generation at a moment’s notice. There are scenarios in which the tokamak doesn’t look so bad: for example, if neither fossil nor nuclear energy sources are socially acceptable. Further, Hirsch et al. offer no concrete alternative to the tokamak, though they urge that such an alternative be aggressively sought.

Over the past year, the U.S. fusion program has been shifting the funding balance in its portfolio to be more along the lines advocated by Hirsch et al., leaving to Japan and Europe the decision about whether ITER is affordable. My view is that if Japan and Europe decide to build ITER, the United States should seek a special appropriation and try to be an equal partner, because there is much good science and technology to be done by ITER, and participation would give the United States high leverage on its investment. Within its domestic budget, however, the United States should more aggressively pursue concept improvements that would allow fusion to be a winner in the U.S. marketplace.

STEPHEN O. DEAN

President

Fusion Power Associates

Gaithersburg, Maryland


Although the articles by Weston M. Stacey and by Robert L. Hirsch et al. present totally different views on the recommended future direction of fusion research, they do agree on two points. Both insist that fusion reactors will be of great future benefit to mankind. And both see this vision as justification for continued and generous funding of fusion R&D by the U.S. government. This logic is hardly new, having been used for decades to justify the fusion research program.

Stacey’s lengthy appeal is often repetitive and exaggerated. He bemoans the fact that funding for the U.S. fusion program has been steadily decreasing, and he wonders why. There are good reasons. A half-century of research effort has mostly revealed that the physics conditions for creating an energy-producing plasma are extremely difficult to achieve, and the closer one comes to that goal, the more difficult further progress becomes. Most devastating is the fact that it is now recognized that even if the physical conditions are achieved, engineering obstacles prevent the practical application of fusion to commercial electricity generation (Physics Today, March 1997, pp. 15, 101, and 102).

Hirsch, Kulcinski, and Shanny argue for an entirely new approach. Having given up on the use of the D-T reaction, which is certainly the most favorable from both the physics and engineering standpoints, they talk vaguely of “advanced fuel cycles.” All fusion fuel cycles are known and each has its special problems and disadvantages. They imply that imaginative research will discover a new plasma confinement scheme that will finally lead to successful applications. And they suggest that greater hope lies with some other unspecified application based on the use of fusion-produced neutrons, protons, or alpha particles. It is hard to imagine any such application that cannot be served readily today by fission reactors and particle accelerators.

Fusion research should be regarded as a legitimate scientific endeavor and should receive funding appropriate to that objective. But the time has come to stop promoting a massive R&D program with the objective of providing a “limitless benign power source,” which fusion cannot offer, or doing so through some other application that remains unknown.

WILLIAM E. PARKINS

Former Director

Research and Technology, Energy Systems Group

Rockwell International


The two articles by Weston M. Stacey and Robert L. Hirsch, Gerald Kulcinski, and Ramy Shanny provide an informative overview of the serious concerns about fusion R&D as the momentous decision approaches on whether to proceed with construction of ITER. Stacey is an avid supporter of ITER; Hirsch and his colleagues believe it is a waste of time and money. In our view, Hirsch, Kulcinski, and Shanny’s article is the more reasonable of the two.

It is certainly true that DOE’s fusion R&D program has become narrowly and inappropriately fixated on tokamak reactors. From what is known to date, tokamaks are extremely expensive, scientifically unproved, technologically challenging, and would generate significant amounts of radioactive waste. After 40 years and $14 billion of taxpayer-funded research, DOE has no idea when or if commercial fusion power will be available.

Moreover, some scientists believe that the problems related to tokamak technology are virtually insurmountable. William Dorland and Michael Kotschenreuther of the Institute for Fusion Studies at the University of Texas at Austin have developed a physics-based model that suggests that plasma turbulence will prevent ignition and the sustainable reaction needed to create fusion power. Indeed, DOE’s Fusion Energy Science Committee released an assessment in April 1997 acknowledging that the difficulty of confining plasma may prevent ITER from achieving its design goals.

Hirsch, Kulcinski, and Shanny are correct: the United States should not allocate any additional money for ITER. The project is losing support throughout Europe, and Japan, the only country interested in providing a site for ITER, is in a severe budget crisis that is forcing a delay in any large scientific project for the next three years. In addition, because the United States and other international partners are not willing to contribute sufficient resources to build the estimated $10-billion facility, Japan would have to provide the majority of the funding, an unlikely prospect.

The large amount of R&D money spent on magnetic fusion, primarily related to tokamaks, competes with funding for renewable energy resources that are more cost-effective and have a much greater chance of providing energy in the near term. Last year, Congress appropriated $232 million for magnetic fusion (primarily tokamak-oriented) and $240 million for inertial confinement fusion for weapons stockpile stewardship activities, for a total of $472 million in FY 1997. In contrast, the entire renewable energy budget (including solar, wind, hydrogen, geothermal, and biomass) for FY 1997 was $266 million.
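As a quick check on this comparison, the totals follow directly from the appropriations cited above; the short Python sketch below simply tallies those figures (the dollar amounts are the ones quoted in this letter, not independently verified).

# FY 1997 appropriations as cited above, in millions of dollars
magnetic_fusion = 232        # primarily tokamak-oriented research
inertial_confinement = 240   # inertial confinement fusion for stockpile stewardship
renewables = 266             # solar, wind, hydrogen, geothermal, and biomass combined

fusion_total = magnetic_fusion + inertial_confinement
print(f"Total fusion appropriation: ${fusion_total} million")           # 472
print(f"Fusion-to-renewables ratio: {fusion_total / renewables:.2f}")   # about 1.77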

Funding for tokamak-based fusion is disproportionately high in comparison to funding for the numerous and diverse renewable sources available, and it creates competition between the two programs for scarce federal dollars within the energy R&D budget. Magnetic fusion should be funded as a basic science program, not as energy-supply R&D. And Hirsch, Kulcinski, and Shanny are mistaken in their belief that fusion research, even if oriented toward alternative concepts and fuels, requires more than $200 million a year.

DOE should phase out its tokamak reactors and fund a modest alternative program oriented toward basic science research. It should abandon the ITER project to those countries, if any, that are willing to pay an exorbitant cost for a high-stakes gamble that may never pay off. The United States and its international partners should increase their commitment to sustainable energy resources, which can provide a greater proportion of the world’s energy needs. Lawrence Lidsky of the Massachusetts Institute of Technology, a former fusion researcher, expressed our conviction when he noted that “It is hard to make an economically based argument for fusion. You can’t justify it, especially as other sources of energy look better and better. The only fusion reactor we need is already working marvelously-it’s conveniently located a comfortable ninety-three million miles away.”

JAMES ADAMS

Safe Energy Communication Council

Washington, D.C.


Investing in R&D

Let me begin by saying that I have the highest respect for Congressman George E. Brown, Jr. Over the years, he has demonstrated a tireless commitment to federal science and technology (S&T) activities, and those of us in Congress who care deeply about science and research owe him a debt of gratitude.

I disagree, however, with Brown’s views about the budget, taxes, and the economy as they relate to S&T (expressed in “An Investment Budget,” Issues, Summer 1997). Congress and the president recently reached agreement on a historic plan to balance the federal budget, eliminate the deficit, and provide much-needed and deserved tax relief to working Americans. In his article, Brown criticizes both the emphasis on eliminating the deficit and the role of tax cuts in this process. Most economists agree, however, that economic growth is stymied by deficits, in part because of higher interest rates associated with those deficits as well as with the burgeoning national debt. The United States currently spends hundreds of billions of dollars per year servicing the interest on the national debt. This is money that could be spent on research, education, and other discretionary programs. With respect to tax relief, aside from the obvious argument that Americans should be allowed to keep as much of their income as possible rather than see it slide into a black hole in Washington, many economists believe that tax cuts stimulate investment, and investment stimulates job creation and economic growth.

As to Brown’s investment budget, there are some aspects with which I agree and some I cannot support. The investment budget pays for increases in spending with increased taxes and reductions in programs such as drug interdiction. At least one-half of the tax increases called for would fall on small businesses, private investment income, state and local governments, and companies exporting products made by U.S. workers. Additional taxes on small businesses and those that export goods will slow or perhaps halt economic growth, altering many of the economic assumptions on which Brown relies for his budget. In addition, at a time when drug use is increasing among our young people, I believe it would be very unwise to reduce spending on interdiction activities.

With the enactment of the bipartisan budget agreement, we are on a course toward deficit elimination, one that will better ensure long-term economic growth. Now we must decide, within the financial boundaries established by the budget agreement, where our priorities lie and define which programs must be funded by government and which should be funded by industry, through public/private consortia or international partnerships. Federal policymakers should work with the scientific community, universities, industry, and the states to define where we need to be in 25 to 30 years and from that determine each entity’s role in a specific program or priority. As part of this process, it would be useful to unify R&D policy through implementation of the Government Performance and Results Act. From this, we can better determine whether industry needs to play a more active role in funding research, which goals should be pursued through partnerships, and what kinds of nonbudget incentives, such as tax credits and regulatory changes, are needed to spur investment in specific areas.

I believe this to be a better approach to guaranteeing a more robust S&T enterprise and a thriving economy well into the next millennium.

REP. STEVEN H. SCHIFF

Republican of New Mexico

Chairman, Subcommittee on Basic Research of the House Science Committee


Representative George E. Brown, Jr.’s investment budget is a far better plan for justifying reasonable growth in federal R&D support than other plans introduced in the current legislative session. A number of the new measures do not address the critical question: Where are the proposed increases in R&D going to come from? The Brown budget proposal has been certified by the Congressional Budget Office (CBO) as fitting within the FY 1998-2002 balanced budget plan. It would focus the debate about future R&D budget levels as it should be focused: on how R&D, along with infrastructure and education, can contribute to intellectual and economic growth and the well-being of the people over the long term. The Office of Management and Budget (OMB), the General Accounting Office, and CBO have all advocated moving to investment budgeting, and they generally agree on R&D, infrastructure, and education and training as investments (in contrast to current expenses). Brown’s figures are similar to OMB’s own illustrative investment budget numbers.

Changing to an investment budget would be a major step for the government and the affected parties, and it is an idea that deserves attention now more than ever. Historically, R&D expenditures have risen and fallen in close concert with the rise and fall of discretionary expenditures. The most important point Brown makes is that the juggernaut of rising entitlement and decreasing discretionary expenditures inevitably forces R&D (and other vital investments) on to short rations into the foreseeable future, especially after the baby boomers swell the entitlement rolls. Given the public aversion to new taxes, and the political aversion to real entitlement reform, R&D and other investment programs that are needed to ensure a strong future will be hostage to current consumption and the ever-growing entitlement elements of the budget. If an investment budget would truly move the nation beyond its current and undesirable underinvestment pattern in education, infrastructure, and R&D, it is worthy of adoption.

Pooling physical capital expenditures, R&D, and education and training in an investment category makes conceptual sense, but there are practical problems that must be debated and resolved. An investment budget, as defined by Brown and OMB, would be nearly half of the present discretionary budget. Growing these accounts would greatly increase pressures for reductions in other government services within the discretionary budget, including essential services such as weather forecasting, food inspection, and statistical services provided by several government departments. Lumping stakeholders such as scientists and engineers and their institutions, highway and aviation advocates, and general education together would create unaccustomed bedfellows. Would they pull together politically for an overall investment budget? Trade-offs and infighting could result.

One might want to be more selective about what constitutes investment. For example, 40 percent of total federal R&D funding supports production engineering, testing and evaluation, and upgrading of existing weapons systems-activities that are not investments in new knowledge and new technologies.

In the end, however, a change that breaks the present pattern of underinvestment in the nation’s future must be made. This change will take time: another political year or two (or more), as Brown recognizes. In democracies, things often get worse before they get better. Meanwhile, ways to move forward through the current stagnation in R&D budgets have been advanced, several in Issues. They include the Academies’ federal S&T budget, Lewis Branscomb’s proposal for a rapprochement in technology policy by forging a political consensus around federal investment in fundamental S&T, and our own R&D portfolio proposal.

MICHAEL MCGEARY

PHILIP M. SMITH

Washington, D.C.


Japan: A new relationship

There seems to be a resurgence of interest by Americans in Japan’s status as an R&D performer and in its place in the technological world. During a hiatus of a few years caused by the decline in the Japanese economy, fears among Americans about foreign competition in high-tech markets were redirected toward other nations, with China being mentioned most often. The economic decline had a direct impact on industrial R&D spending in Japan, with investment levels dropping for the first time in several decades. This led to even more complacency among Japan watchers. Now industrial R&D in Japan is turning upward again and is being augmented by large increases in government research spending.

In a significant confluence of events, the important article by George R. Heaton, Jr. (“Engaging an Independent Japan,” Issues, Summer 1997) appeared almost simultaneously with an in-depth report by the National Research Council (NRC) (Maximizing U.S. Interests in Science and Technology Relations with Japan), and both publications coincided with the Fifth International Conference on Japanese Information in Science, Technology, and Commerce, held in Washington, D.C., at which several papers addressed the relevance of Japanese science and technology (S&T) and how to facilitate the transfer of technical information from Japan.

Heaton discusses how the United States should comport itself in dealing with a Japan that has reached technological parity with the world. He points out that the United States can gain substantially from Japan’s technological prowess if it can change past ways of interacting with Japan, first by looking on Japan as an equal partner and then by relying more on cooperation at the level of the individual scientist or engineer rather than on agreements reached at high political levels.

The NRC report, while seeking the same objective of increased cooperation, claims that serious asymmetries between the United States and Japan in market access, personnel exchanges, licensing of patents and know-how, and other market forces have inhibited cooperation in S&T, and that government actions are required to overcome these trade barriers before full cooperation can be achieved. Both Heaton and the NRC report point out that previous U.S. government failings in enlisting R&D cooperation with Japan have led to deterioration in the bilateral technical relationship.

At the international conference, R. D. Shelton and Geoffrey M. Holdridge of the International Technology Research Institute at Loyola College in Maryland concluded that the economic recession in Japan did little to slow Japan’s progress in R&D or in expanding its markets for high-tech products abroad, and that the state of Japanese industrial S&T is at an all-time high and is improving. Their conclusions are based in part on a long high-quality series of evaluations of Japanese technology sponsored by the National Science Foundation and other government agencies, so at least some good has come from government intervention in the process of understanding and exploiting the Japanese R&D scene.

What will undoubtedly not be the last word on this subject has been written by Admiral (Ret.) James D. Watkins [Science, Vol. 277, 1 August 1997, p. 650], who places the blame for U.S. ineptitude in international S&T cooperation on the White House and the Department of State, where he found little understanding of or interest in working with other nations while he was Secretary of Energy (1989-1993). Unfortunately, there is plenty of blame to pass around. The Department of Energy, some other federal agencies, the Congress, and parts of industry warrant a share also. It would be nice to see one or more of these influential groups take the lead in setting the United States on a new course of enlightenment in international R&D cooperation.

JUSTIN L. BLOOM

President

Technology International, Inc.

Potomac, Maryland


In the past, the success of the Japanese economy rested on the adoption and often the improvement of foreign technology. Most of the S&T effort was devoted to the assimilation and diffusion of foreign technology. This “pursuer” mode was so successful that Japan could use S&T resources more efficiently than any other country. However, as Japanese industries became stronger and more independent of the government and as Japan’s presence in the international arena grew, Japan began to move in a new direction in S&T around the beginning of the 1980s. Japan wished to shed its pursuer mode and shift to a pioneer mode.

Government policies and organizations were required to adapt to the changing economic and social environment. The goal was to become a center of excellence in the world scientific community and to make a significant international contribution. The government has been making efforts to reorganize S&T systems and renovate institutional frameworks. In spite of continuing budgetary deficits, the government has managed to increase S&T resources and secure allocations for more fundamental research. Internationally, the emphasis has been placed on efforts to open up national research and promote international cooperative projects. The pioneer mode has been successful in part, but it is not full-fledged because inertia from the past has been an obstacle.

The new Basic Law for Science and Technology aims to increase research funding and change the institutional frameworks of S&T systems. A systematic implementation plan supports this aim and provides for the review and evaluation of S&T policies. Changing the national S&T system depends on the balance of two elements: the above-mentioned national efforts and interaction with other countries. Globalization requires the national S&T system to be more interdependent and cooperative in the international arena. Moreover, recent research projects are clearly more expensive, whereas the life cycles of technologies have shortened. Collaborative activities with foreign research communities help both parties to pool and utilize collective resources more systematically. And pooling mutually unfamiliar practices and approaches may create opportunities to develop S&T in challenging directions. At the same time, this cooperation gives Japan the stimulus to excel among scientific communities around the world.

MASAMI TANAKA

Secretary General

Japanese Industrial Standards Committee

Agency of Industrial Science and Technology


Bio invaders

Don C. Schmitz and Daniel Simberloff have done a fine job of summarizing the problems caused by biological invasions and suggesting solutions (“Biological Invasions: A Growing Threat,” Issues, Summer 1997). They bring to their analysis not just familiarity with current research but also firsthand experience in day-to-day pest management. We ignore their insights at our peril.

The kind of overview they provide requires reaching across the usual disciplinary boundaries. Yet there is a broader synthesis still to be done, which is one of the reasons why better national leadership is needed so urgently. So far, discussions of the effects of alien species have not fully incorporated threats to human health or the impacts of similar emerging diseases found among wildlife. At least 1465 Americans fell ill in 1996 from a Cyclospora parasite first identified in New Guinea in 1977. Likewise, reptile-associated Salmonella outbreaks are on the rise, prompting the Centers for Disease Control to issue cautions in 1995 about handling reptiles, many of which are exotic imports. The National Wildlife Health Research Center has documented a number of outbreaks of waterfowl diseases since 1975, each killing 25,000 to 100,000 birds. Some were caused by diseases not native to the United States; others by microorganisms spreading beyond their usual U.S. distribution.

With such failures to control the movement of diseases and pests even before trade became “free,” the growth in global trade is worrisome. The first case directly related to a formal assessment of pest risks is now pending before the World Trade Organization (WTO). Australia bans certain salmon imports, it says, to protect native fish from nonindigenous diseases. The Australian government finished a risk assessment in December of 1996 detailing the situation. The United States complained to the WTO that the ban is protectionism. The WTO panel’s deliberations will be the first indication of how the need for science-based risk assessments regarding biological invasions will be interpreted. The case bears watching.

Schmitz and Simberloff accurately cite the Office of Technology Assessment’s (OTA’s) estimate that biological invasions cost the United States hundreds of millions, if not billions, of dollars annually. We at OTA suspected that even these figures were low, and subsequent research has borne that out. But there are other significant costs that remain uncounted. One of the most insidious costs of biological invasions is that they make one locale much like any other and rob us of our sense of place. A McDonald’s on every corner; Queen Anne’s lace along every roadside. Without major efforts at education, few of us will be able to tell what is ecologically real and what is not.

PHYLLIS N. WINDLE

President


Florida is fortunate in having a mild climate, many types of soils, and a noteworthy diversity of fauna and flora. Our commercial agricultural and horticultural prosperity is great. Unfortunately, these very factors are extremely favorable to the introduction and establishment of exotic pests and diseases. A recent analysis of the number of arthropods that have become established in Florida from 1987 to the present gives a dismal picture. In 1993, a record 15 exotic pest arthropods were found to have become established in Florida. By July of 1997, we had already discovered 14 new pest arthropods. This bodes ill for our agricultural and native plant heritage.

Over the past several decades, exotic plant pest and disease introduction has been an increasing problem in Florida, with over $160 million being spent on exotic plant pest and disease eradication since 1970. We are likely to spend over $20 million in eradication programs for the Mediterranean fruit fly alone and $7 million for Asian-strain citrus canker. Recent introductions of exotic pests that affect endangered native plants include a moth, Cactoblastis cactorum, that is damaging an endangered cactus species in the Florida Keys; and a weevil, Metamasius callizona, that attacks many of our bromeliads. Noxious weeds such as tropical soda apple and cogon grass are very invasive and displace native plants in their habitat, and the recent introduction of the tomato yellow-leaf curl geminivirus will have an as-yet-unknown impact on Florida’s tomato industry.

Plant pests, diseases, and noxious weeds spread to new areas through the movement of plants and plant products, primarily through the movement of cargo and the traveling public, who often carry illegal produce and other agricultural products in their baggage. Florida is a mecca for international trade and tourism, with 14 deep-water ports and eight international airports. The U.S. Department of Agriculture (USDA) reports that over 10,000 interceptions of agricultural pests of economic significance occur annually at Florida ports. This is alarming because only two percent of all incoming foreign cargo and passengers are actually inspected by USDA. Also, there has been a surge of agricultural imports for many reasons, including the North American Free Trade Agreement and the General Agreement on Tariffs and Trade.

The recent trend of increasing numbers of invasive pest organisms constitutes a crisis threatening the natural and agricultural interests of the state of Florida. It is extremely important that all parties with an interest in preserving our biotic heritage institute changes that will stem the tide of invasive hordes of pestiferous organisms. This can only be achieved on the bedrock of significant investment of resources in preventing, detecting, and eradicating exotic pests.

BOB CRAWFORD

Commissioner of Agriculture

Florida Department of Agriculture & Consumer Services


Better skills, better business

Kenan Jarboe and Joel Yudken’s article is a clarion call for public involvement in the evolution and dissemination of “high-performance work systems.” In general, we agree with the authors. However, at the risk of becoming the skunk at the company picnic, we think it is necessary to temper their enthusiasm in two respects.

First, we think it will be difficult to create general standards and incentives to promote modern work practices across the broad array of U.S. work institutions. As the authors point out, high-performance practices tend to come in idiosyncratic bundles that differ from company to company. Particular bundles of high-performance practices always work somewhere and sometime, but never everywhere or all the time. High-performance practices are organic and have to be home-grown one company at a time.

Second, the authors’ enthusiastic endorsement ignores the dark side of high-performance systems. At their core, these systems reduce risk and costs by combining flexible technology and flexible work systems in order to create flexible production and service networks made up of suppliers, contractors, and partners. But for many U.S. workers, flexible has become a fancy word for fired or reduced wages and benefits. High-performance systems tend to shift economic risk from institutions to individuals and from large to small institutions.

This dark side creates dilemmas for public policy. Understandably, the government supports such systems in order to encourage flexibility in response to the realities of global economic change. At the same time, the insecurity that results from our flexible new institutions suggests the need for a corresponding system of flexible and portable health care, pensions, training, and day care. Moreover, in a world where job security has gone out with gold watches at retirement, employees have more of a stake in economic change and a greater need for a voice in economic decisions. Public promotion of high-performance systems would have to combine the efficiency needs of employers with workers’ needs for economic security and a voice.

Public officials tend to overlook the detrimental effects that result from divergent employer and worker needs. This “happy workers and happy workplaces” view of high-performance work systems ignores the inherent conflict and necessary tradeoffs between employers and employees, and reduces the credibility of government policies among employers and workers. In addition, because high-performance systems and the economic returns from them are firm-specific, it is not clear that the government can or should support them in general.

Public involvement in high-performance systems is inherently difficult, but we agree with the authors that it is worth trying. One way around the problems might be to work through intermediaries such as industry and trade associations, unions, and relevant educational organizations. The old rules still apply: The government should take on functions when private markets fail and public benefits justify investment, as is currently the case in education, health care, pensions, and R&D. Improvements in each of these areas where the government is already engaged would go a long way toward making the workplaces we have the high-performance workplaces we need.

ANTHONY P. CARNEVALE

DONNA M. DESROCHERS

Educational Testing Service

Princeton, New Jersey


The Council on Competitiveness is deeply involved in exploring ways to better prepare U.S. workers. In our recent membership survey, leaders from industry, labor, and academia told us that increasing the number of skilled workers is the United States’ most serious competitiveness challenge in the next decade. Education and training are essential not only to increase productivity but to boost wages and improve the standard of living for all Americans.

This assessment is underscored by our current field work. For the past year, the council has taken a hands-on approach to determine how corporations of all sizes, workers, and schools (from vocational training institutions to four-year colleges) are responding to the national skills shortage. With the help of a task force made up of experts and practitioners, the council has conducted scores of meetings and interviews to explore some of the most pressing workforce issues confronting a wide variety of firms. These issues include incumbent workers and continuous learning, basic skills training for entry-level workers, new learning technologies, and training challenges facing small and mid-sized businesses.

The issues are complex, but the underlying problem is simple: The demand for increased skills is rising a lot faster than the capacity of companies, workers, or the nation’s educational system to respond. Job requirements and skills are no longer static; employers in all industries are urgently looking for workers who can adapt quickly to new tasks and new market demands. Many companies are seeing vast portions of their existing skilled workforce retire, and they are scrambling to fill the void. In response, educators and trainers are frantically working to shorten and sharpen the learning cycle.

In this competitive environment, workers and employers are beginning to work together to improve productivity and customer satisfaction. As Kenan Patrick Jarboe and Joel Yudken aptly point out, worker involvement is absolutely key to instituting high-performance work systems. Our findings show that this move toward worker empowerment leads to new demands for education and training. If we expect front-line workers to undertake new responsibilities, we must give them additional opportunities to learn on the job and increase their skills and knowledge base.

In fact, more and more companies are giving employees a say in designing learning systems. When management and workers collaborate to define the skill sets that will be needed down the road, the result is a better training system and workers who see the value of learning. Employees respond to training demands when they are convinced that they are partners in the process.

Employer and employee must share the responsibility for learning. The most inspired corporate leadership creates the conditions to help all workers meet their objectives. And employees with the best prospects quickly engage in the learning process and take responsibility for their own education and training. Only when both parties have a stake in the results can companies bank on improved performance and employees acquire the necessary skills to advance in the labor market.

GRETCHEN RHINES

Council on Competitiveness

Washington, D.C.


Pregnancy planning and prevention

In “Missing the Boat on Pregnancy Prevention” (Issues, Summer 1997), Carol J. Rowland Hogue provides a useful summary of the major issues attending unintended pregnancies. Some aspects of the problem merit further discussion.

Despite enormous public attention and disapprobation and the expenditure of untold millions of dollars, little progress has been made in reducing rates of teenage pregnancy. Few programs have proven effective. To intervene successfully, it is essential to recognize that young adolescents 12 to 17 years old have very different experiences, cognitive skills, and motivations than older adolescents. Many pregnancies of younger teenagers are intentional. Sex and parenthood have different meanings at different stages of development across the life span. Interventions to influence sexual behavior need to be developmentally sensitive and appropriate.

Reasoned and informative discussions about sexuality and sexual behavior are nearly absent from public discourse. They are, tragically, also infrequent within families, where many believe they rightly belong. Part of the reluctance to discuss sex, like other emotional topics, is the superstition that talking about something will increase the probability that it will happen. That myth has been used to keep sex education out of the public schools for decades. Two other circumstances prevent discussions that could reduce unintended pregnancies. First, many adults are uncomfortable talking about sexual topics. Often families have not established a climate in which the family’s values are explicit but communication, including disagreement, remains open and encouraged. Such patterns of communication must be established well before touchy topics such as teenagers’ sexual behavior can be fully discussed. Second, many parents are not knowledgeable about sexual development and contraception. Uninformed parents sense that they cannot be effective teachers and so avoid the topics. As Hogue points out, the public needs to be educated, and the process should include, if not start with, adults.

Child advocacy groups have long called for every child to be wanted. A corollary is the need for preparation for pregnancy, and thus for family planning. The principal way to ensure that a baby will be born healthy and develop optimally is for women to prepare themselves physically, nutritionally, and, I suspect, psychologically before they conceive. Critical stages of fetal development, such as formation of the major organs (including the brain), occur before most women are sure they are pregnant.

Having a physically healthy infant is an important start to family life, but successful child-rearing takes more. Being an effective parent is perhaps the most important and difficult job individuals take on, yet it gets much less public scrutiny and validation than many other social roles. Somewhere in our society’s deeply held belief in the sanctity of the family is the ill-founded assumption that the ability to procreate is equivalent to the ability to successfully raise a child. That assumption is a formidable barrier to encouraging and assisting men and women not only to plan their pregnancies but to plan for managing the responsibilities that accompany having a child. As a society, we must heighten our awareness of the obligations that accompany parenthood-the obligations of parents to their children and our obligations to support them in that role.

EDWARD L. SCHOR

Medical Director

Iowa Division of Family and Community Health


Carol J. Rowland Hogue makes a very persuasive case regarding the problems stemming from unintended pregnancies among adults. This is clearly a major national problem, and her solutions, including increased access to contraception, are important. But it is hard to see how she could discuss unintended pregnancy without discussing the impact of our current welfare system.

Ever since Charles Murray raised it in Losing Ground, perhaps no issue has been as hotly debated as the link between out-of-wedlock births and welfare. However, the overwhelming weight of evidence now appears to show a clear correlation between the availability of welfare benefits and the growth in out-of-wedlock births. There have now been 16 major studies of this link, with 13 finding a statistically significant correlation.

Of course, women do not get pregnant just to get welfare benefits. It is also true that a wide array of other social factors has contributed to the increase in out-of-wedlock births. But by removing the economic consequences of such births, welfare has removed a major incentive to avoid them. Until individuals, particularly those living in relative poverty, can be made to see the very real consequences of unintended pregnancies, it will be impossible to gain control over the problem of out-of-wedlock births. By disguising those consequences, welfare makes it easier for women to make the decisions that will ultimately lead to unwed motherhood. As Murray has explained, “The evil of the modern welfare state is not that it bribes women to have babies-wanting to have babies is natural-but that it enables women to bear children without the natural social restraints.”

Any attempt to address the problem of unintended pregnancy must also address the incentives of the modern welfare state. Only through the elimination of welfare subsidies for out-of-wedlock births can we hope to begin to instill the values that are required to significantly reduce unintended pregnancies. Without welfare reform, all of Hogue’s proposals would be in vain.

MICHAEL D. TANNER

Director of Health and Welfare Studies

Cato Institute

Washington, D.C.


Making emergency contraception more widely available is one of the most important steps we can take to reduce the unacceptably large number of unintended pregnancies and the consequent need for abortion in the United States. Unfortunately, most women do not know that ordinary birth control pills containing the hormones estrogen and progestin can be used to prevent pregnancy up to 72 hours after unprotected sexual intercourse.

Emergency contraceptives available in the United States include emergency contraceptive pills (ECPs), minipills, and the copper-T intrauterine device (IUD). None can be obtained without a prescription and none is marketed as an emergency contraceptive. Even though some doctors have been prescribing emergency contraceptives since the 1970s, no company has applied to the Food and Drug Administration (FDA) to market birth control pills or IUDs for emergency contraception. Although considerable international research attests to the safety and efficacy of emergency contraceptives, manufacturers cannot market or advertise these products for postcoital use until they seek and gain formal FDA approval for this specific purpose. Without commercial promotion, it is not surprising that physicians prescribe emergency contraceptives infrequently and fail to provide information about emergency contraception to women during routine visits; as a consequence, very few women know that emergency contraception is available, effective, and safe.

Half of all pregnancies in the United States are unintended: 3.2 million in 1994 alone, the last year for which data are available. Unintended pregnancy is a major public health problem that affects not only the individuals directly involved but also the wider society. Insurers in both the public and private sectors generally cover the medical costs of unintended pregnancy, with coverage for abortion showing the most variation. Public payers generally provide broader contraceptive coverage than private payers, although payment levels often are low, perhaps low enough to limit access. Extending explicit coverage to emergency contraception would result in cost savings by reducing the incidence of unintended pregnancy.

Several innovations in service delivery would also enhance the potential for emergency contraception to significantly reduce the number of unintended pregnancies. Perhaps the greatest impact would result from changing provider practices so that women seen by primary and reproductive health care clinicians would be routinely informed about emergency contraception before the need arises; the recent clinical practice pattern issued by the American College of Obstetricians and Gynecologists should further this goal. Information could be provided during counseling or by brochures, audio or video cassettes, or wallet cards. A more proactive step would be to prescribe or dispense emergency contraceptive pills in advance so the therapy would be immediately accessible if the need arises. Availability would also be enhanced if manufacturers sought FDA approval for and then actively promoted emergency contraceptives; the recent FDA notice in the Federal Register declaring ECPs to be safe and effective will make gaining approval far easier in addition to giving explicit official sanction to ECP use. Until clinicians, manufacturers, or insurers make these changes, the only way to improve access is to inform women directly about the availability of emergency contraception so that they themselves can demand better clinical care.

To help educate women about this important option, the Reproductive Health Technologies Project (RHTP) in Washington and the Office of Population Research at Princeton University sponsor the Emergency Contraception Hotline (1-888-NOT-2-LATE) and the Emergency Contraception Web site. Since it was launched on February 14, 1996, the hotline has received more than 58,000 calls. More detailed information is available on the Emergency Contraception Web site, which has received more than 135,000 hits since it was launched in October 1994. RHTP has received funding from several foundations to work with the Elgin DDB agency to develop public service announcements for print, radio, television, and outdoor venues. This summer, a public education campaign was launched in four test cities (Chicago, Los Angeles, San Diego, and Seattle) in partnership with a coalition of local organizations and clinicians in each area.

JAMES TRUSSELL

Director

Office of Population Research

Princeton University

From the Hill – Fall 1997

House, Senate endorse big increases in FY 1998 R&D budgets

In the wake of the balanced budget agreement, both the House and Senate this summer endorsed big increases in the FY 1998 budgets of federal R&D agencies. However, no final decisions on appropriations had been made as Issues went to press in early September, and there were some major differences between various House and Senate appropriations bills that needed to be reconciled.

The terms of the balanced budget legislation, signed by President Clinton on August 5, along with a growing economy that is boosting government revenues, have provided at least a temporary boon for discretionary spending programs. This is in marked contrast to FY 1996 and 1997, when funding for many programs was cut. Appropriators have responded by singling out key R&D programs for increased funding, thus reaffirming the importance of science and technology for the nation.

The big R&D spending increases, however, are not likely to last beyond FY 1999. Appropriators will have to make sharp cuts in discretionary spending beginning in FY 2000 in order to balance the budget by 2002 as planned. In addition, even if this year’s proposed increases are enacted, the budgets of R&D agencies will still be below their FY 1994 levels in inflation-adjusted terms because of steep cuts made during the past three years.

The big picture

The House would provide $75.3 billion for federal R&D in FY 1998; the Senate, $75.4 billion, representing increases of 2.7 and 2.9 percent, respectively, from the FY 1997 budget of $73.3 billion. However, these increases are expected to barely exceed projected inflation during the coming year.
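The percentage increases can be recomputed from the totals given above; the Python sketch below shows the arithmetic (the dollar totals are the ones stated in this report, and the rounding to one decimal place is mine).

# Federal R&D totals as stated above, in billions of dollars
fy1997_total = 73.3
house_fy1998 = 75.3
senate_fy1998 = 75.4

def percent_increase(new, base):
    # Percentage change of new relative to base
    return 100 * (new - base) / base

print(f"House increase:  {percent_increase(house_fy1998, fy1997_total):.1f}%")   # about 2.7%
print(f"Senate increase: {percent_increase(senate_fy1998, fy1997_total):.1f}%")  # about 2.9%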

Nondefense R&D would climb to $34.9 billion (up 4.4 percent) in the House plan and $35 billion in the Senate plan (up 4.6 percent) because of large increases for most of the civilian agencies. Although this would be slightly below the FY 1994 level in inflation-adjusted terms, it would still begin to reverse the cuts of the past three years.

Defense R&D, which includes Department of Defense (DOD) R&D and the defense activities of the Department of Energy (DOE), would rise only 1.4 percent under the House bills and 1.3 percent under the Senate bills because of cuts in defense development. Research would fare better than development, with applied research increasing 5.3 percent in the House and 3.6 percent in the Senate. The notable exception is basic research, which would be cut 4.4 percent in the House bill.

Spending on basic research overall in FY 1998 would increase by 3.4 percent under the House bill and 4.9 percent under the Senate plan. Both would represent an all-time high for federal investment in basic research in inflation-adjusted terms.

Nearly every major R&D agency would receive an increase well above the rate of inflation, and key research accounts would be funded at levels much higher than the current-year levels or the president’s request. Here is the proposed R&D funding breakdown by agency:

National Science Foundation (NSF): The House approved an 0.6 percent increase, to $2.6 billion, for NSF’s R&D programs. Including NSF’s non-R&D funds, the total NSF budget would be $3.5 billion, which is $217 million or 6.6 percent more than in FY 1997. The House would provide $115 million to fully fund the renovation of the South Pole Station and other research facilities in Antarctica. The Senate approved a slightly smaller increase, mainly because it allocated less money for the South Pole activities.

National Institutes of Health (NIH): A draft Senate bill would add nearly $1 billion to the NIH budget in FY 1998, for a total of $13.7 billion (up 7.5 percent). A draft House bill would provide $13.5 billion, which is 6 percent or $765 million more than the current year. Both amounts would be significantly above the president’s request of $13.1 billion. Every institute would receive at least 6 percent more than this year in the Senate bill and at least 4 percent more in the House bill.

National Aeronautics and Space Administration (NASA): The House approved a 4.7 percent increase to $9.0 billion for NASA’s R&D activities, which is well above the president’s request. NASA’s total budget would decline slightly to $13.6 billion. The House would fully fund the Space Station and would provide $100 million if Russia fails to make its promised contribution to the project. The Senate approved the president’s requested budget of $13.5 billion, which includes a 3.1 percent boost to the agency’s R&D activities.

Environmental Protection Agency (EPA): The House would provide $610 million for EPA’s R&D activities, a 12.7 percent increase. The Senate approved a smaller but still significant 6.6 percent increase. Both chambers would boost EPA’s research effort in particulate matter and other airborne hazards in order to improve the scientific foundation for EPA’s regulatory activities.

Department of Energy (DOE): The Senate endorsed a 0.7 percent increase, to $3 billion, for DOE’s defense R&D, including $190 million for the National Ignition Facility. The Senate would also provide $240 million for magnetic fusion, $15 million more than the request, as part of a 2.6 percent increase for energy supply programs. In all, the Senate would allocate $6.3 billion for DOE R&D, 4 percent more than this year. The House, however, would cut DOE’s R&D by 0.7 percent, to $6.1 billion, by holding defense-related R&D steady and cutting energy-related R&D. Both houses would support the Large Hadron Collider project and other physics research.

Department of Defense: The Senate approved $37.4 billion for DOD’s R&D, including a 0.7 percent increase for basic research and a 3.6 percent increase for applied research. The House, however, would cut DOD’s basic research by 4.4 percent within a $37.6-billion R&D budget. Both houses would add significantly to DOD’s growing effort in medical research.

Department of Commerce: The Senate approved a 9.0 percent increase in Commerce R&D, to $1.1 billion. The Senate strongly endorsed the National Oceanic and Atmospheric Administration’s (NOAA’s) R&D on oceans, atmosphere, and marine resources and would provide a 12.7 percent increase, far above the president’s request, to $634 million. The Senate would boost R&D at the National Institute of Standards and Technology (NIST) by 5.5 percent, to $604 million. NIST’s Advanced Technology Program would receive $211 million, $14 million less than this year. The House would also increase Commerce R&D, by 9.4 percent, but would do so by cutting NOAA’s R&D and adding $110 million for construction of NIST’s R&D facilities.

Department of the Interior: The House appropriation for Interior contains $603 million for R&D in FY 1998, which is 4.2 percent more than the current year, with larger increases for natural resources research and National Park Service research programs. The Senate has proposed $610 million.

Department of Agriculture: The House and Senate would trim USDA’s R&D budget by 2.7 percent and 1.9 percent, respectively, because of cuts in earmarked R&D facilities projects. This would allow support for USDA’s basic and applied research to increase at least at the rate of inflation.

Department of Transportation (DOT): The House and Senate both approved a nearly $43 billion FY 1998 budget for DOT, almost 10 percent more than the current year. Highway programs, air safety, Amtrak, transit grants, and the Coast Guard would all receive large increases, but the House would cut DOT’s R&D by 6.1 percent, to $610 million, whereas the Senate would provide $654 million, which is only slightly above the FY 1997 funding level.

Department of Veterans Affairs: The House approved $302 million, which is 11.4 percent more than the current year. The Senate would allocate $276 million, slightly more than this year’s $271 million.

R&D funding updates, complete with detailed tables, can be found on the World Wide Web in the “FY 1998 R&D” section.

Climate-change conference evokes concern in Congress

With an important international conference on climate change set for December 1997 in Kyoto, Japan, Congress is turning its attention to the issue amid concerns that any agreement signed at the meeting might require the United States to implement costly new environmental and other regulatory actions.

The scientific uncertainty associated with global climate change has aroused much consternation on the Hill. On July 10, a panel of scientists testifying before the Senate Environment and Public Works Committee confronted the uncertainty problem head-on. The witnesses discussed the natural variability inherent in the global climate and pointed out the lack of continuity and consistency of environmental measurements. They said that more research is needed. When asked by Sen. James Inhofe (R-Okla.) if the scientific uncertainties could possibly be resolved by December, the panelists admitted that there probably would not be any good answers. Some of the panelists were quick to make the case, though, that the existence of uncertainty does not mean that there is an insufficient basis for good decisionmaking. “Sound science doesn’t mean certain science,” stated Stephen Schneider of Stanford University.

At a July 15 hearing of the House Commerce Committee Subcommittee on Energy and Power, committee members made clear their dissatisfaction with the fact that the Clinton administration has not yet taken a position for the Kyoto conference, during which the international community will try to agree on a treaty that would reduce emissions of greenhouse gases that contribute to climate change. Rep. John Dingell (D-Mich.), the Commerce Committee’s ranking minority member, expressed strong concern about the possibility that developing nations such as China would be exempted from any binding emissions restrictions adopted in Kyoto. The administration, replied Timothy Wirth, Undersecretary of State for Global Affairs, is advocating a system by which developing nations could adopt the international emissions restrictions in the future as their economies “evolve.”

In a statement to the United Nations in July, President Clinton said he intended to examine the implications of climate change and emissions restrictions. Since then, he has begun meeting with interested constituencies, including scientists, industry leaders, and environmentalists, as well as members of Congress. This dialogue will form the basis for the administration’s position in Kyoto.

Congress deeply split over encryption technology regulation

Issues involving regulation of data encryption technology are continuing to trouble Congress, which has been trying to weave a path among the administration’s need to protect national security, the software industry’s desire to export the most sophisticated encryption software, and the concerns of free-speech advocates who believe restrictions on encryption software would violate civil rights.

Although a bill introduced in early 1997 by Rep. Bob Goodlatte (R-Va.) to liberalize current export controls on encryption products has garnered more than 250 supporters, similar legislation introduced by Sen. Conrad Burns (R-Mont.) has failed to attract support in the Senate. In June, Sen. John McCain (R-Ariz.), chair of the Commerce Committee, and Sen. Bob Kerrey (D-Neb.) introduced a bill at odds with both the Goodlatte and Burns bills.

The McCain-Kerrey bill differs from the Burns and Goodlatte bills on several key points. The Burns and Goodlatte bills would eliminate existing export controls, whereas the McCain-Kerrey bill would allow producers to export encryption products based on the Data Encryption Standard, a 56-bit encryption algorithm. To export a more sophisticated product, sellers would have to include a “key-recovery” capability that would enable court-authorized law enforcement officials to decode the encrypted data if necessary.

The McCain-Kerrey bill also goes far beyond the scope of the Burns and Goodlatte bills by outlining the legal parameters of a voluntary key-recovery infrastructure. With such an infrastructure, encryption users would have the option of depositing electronic keys to their encrypted data with the proper authorities.
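To make the key-recovery concept concrete, here is a minimal sketch in Python. It assumes the third-party “cryptography” package, and the recovery agent, algorithm choices, and key sizes are illustrative assumptions, not features of any of the bills discussed above.

```python
# Illustrative sketch of "key recovery": the message key is escrowed (wrapped)
# under a recovery agent's public key so that a court-authorized request could
# later unwrap it. Assumes the third-party "cryptography" package; all names
# and parameters here are hypothetical, not taken from the bills themselves.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical recovery agent's key pair (held by a trusted third party).
agent_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
agent_public = agent_private.public_key()

# The user encrypts a message with a fresh symmetric session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"sensitive business records")

# A copy of the session key is deposited ("escrowed") with the agent in wrapped form.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
escrowed_key = agent_public.encrypt(session_key, oaep)

# Under court authorization, the agent unwraps the key and the data can be decoded.
recovered_key = agent_private.decrypt(escrowed_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"sensitive business records"
```

As the sketch suggests, the policy dispute turns less on the cryptographic mechanics than on who holds the escrowed key and under what legal process it may be unwrapped.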

The McCain-Kerrey bill comes much closer to the administration’s position than have previous legislative proposals and has thus received administration support. The administration has long been pushing for some form of a national key-recovery system in order to give law enforcement and national security agencies the power to tap into encrypted communications.

Report on human cloning criticized

Some members of Congress have criticized the National Bioethics Advisory Commission’s (NBAC’s) recent report on human cloning, saying that the commission’s recommendations do not go far enough in providing guidance on the emotionally charged issues of cloning and genetic manipulation.

The commission released its report on the ethical, legal, and social implications of human cloning on June 7. On June 9, adopting the NBAC’s recommendations, the Clinton administration proposed legislation to extend the current ban on federal funding for research involving the cloning of a human being. Three bills to prohibit research involving the cloning of a human, two in the House and one in the Senate, have already been introduced in Congress.

Citing concerns about risks to the human fetus and the need for further debate on the ethical and legal issues involved in cloning humans, the commission said that at this time it would be wrong to try to create a child through somatic cell nuclear transfer, the technique that was used to create the now-famous Dolly, a lamb cloned from an adult sheep’s cells. The commission said, however, that use of the cloning technique may produce scientific and medical benefits, and urged that its use for research purposes that do not involve human reproduction not be impeded.

By focusing on somatic cell nuclear transfer, the commission steered clear of the controversial issue of human embryo research. Under current law, it is illegal to use federal funds to conduct research that involves the creation of a human embryo. The NBAC’s report argues that human embryo research has already received considerable time and attention from Congress and the administration.

Some members of Congress, however, were critical of the panel’s silence on the embryo research issue. Because the report is very specific in opposing the creation of a child through cloning, some members of Congress have interpreted this to mean that the NBAC has tacitly endorsed embryo research as long as the embryo does not develop into a child.

Shortly after the NBAC report was released, Sen. Christopher Bond (R-Mo.) issued a statement saying that “I had hoped that the federal ethics commission would not be afraid to make a strong moral statement that human cloning is wrong, period, and should be banned. But when it came to the tough questions, they punted, and now it will be up to Congress and state legislatures to resolve those issues.”

Asked whether the commission had endorsed embryo research involving cloning, Harold Shapiro, the NBAC’s chair, and other commissioners have answered with a resounding no. They said that the NBAC had resolved early on that embryo research was beyond the scope of its assignment from the president.

Bond also expressed concern over the NBAC’s recommendation that federal cloning legislation be subject to review and revision in five years through a “sunset clause.” He said that because he and many others believe that human cloning will never be morally permissible, there is no need to allow the possibility of revising or rescinding a cloning ban. “They are leaving the door wide open to future cloning,” he stated.

Rep. Vernon Ehlers (R-Mich.) echoed Bond’s concern about the sunset clause, stating that he would rather enact a law without such a clause, with the cloning ban legislation reviewed and amended on an as-needed basis.

Federal Power Dinosaurs

New power-generating technologies and low natural gas prices are spurring competition in the electricity market. This has led a growing number of lawmakers to push for breaking up the utility industry’s monopolies, but few have paid attention to the utility monopolies owned by the federal government itself. The time has come for lawmakers to face up to the cost of government ownership and to confront the question of whether federally owned power companies have a place in a competitive utility marketplace.

No doubt there was a time when the Tennessee Valley Authority (TVA), Power Marketing Administrations (PMAs), and Rural Electrification Administration (REA) were needed. As late as 1935, when only 15 percent of rural Americans had electricity and trying to obtain electricity often required a pitched battle with what were then unregulated utility trusts, TVA, PMAs, and REA were necessary and effective. But the electricity market has changed dramatically in the past 60 years, and federal utilities have become dinosaurs, creations that worked well in one environment but are resisting the changes needed to adjust to the realities of a new era. Moreover, the government’s power program, designed originally to help poor farmers, now provides subsidized electricity to a vast array of individuals that includes condo owners in Aspen, golfers in Hilton Head, and the fabulously wealthy entrepreneurs of Palo Alto.

The federal government has become this nation’s largest generator of electricity, supplying some 9 percent of U.S. power. TVA (the country’s biggest producer of electricity), Bonneville, and the other PMAs typically sell electricity wholesale, giving preference to rural electric cooperatives and municipal utilities serving communities in the West and South; 21 percent of their generation is sold directly to customers, most notably aluminum companies in the Pacific Northwest. These federal utilities receive substantial taxpayer subsidies and aim to provide power to their “preference” customers at below-market rates.

Although current beneficiaries would like their federal power subsidies to continue indefinitely, those benefits distort the market, discourage efficiency, waste taxpayer dollars, and pit regions against each other. As lawmakers try to create a level playing field in the electricity market, they simply cannot exempt from competition some of this nation’s largest utilities. Perhaps most important, they cannot prohibit a large number of U.S. consumers from enjoying the advantages of competition.

A waking giant

Electric utilities are this nation’s largest industry. They have assets in excess of $600 billion and annual revenues of about $200 billion, almost 30 percent more than the U.S.-based manufacturers of automobiles and trucks. They operate a vast technological complex, with thousands of power plants and hundreds of thousands of miles of transmission lines. Virtually every American is connected to this industry, confidently able to flick switches and turn on lights, heaters, and appliances. Emerging businesses in computers, robotics, and electronics rely on electric companies to supply the reliable and standardized power that is necessary for sophisticated machinery to operate.

For almost three generations, utilities have guaranteed universal access to power in exchange for government-sanctioned returns on their investments. Most utilities have been integrated monopolies, generating, transmitting, and distributing electricity to consumers in their exclusive service territories. These monopolies provided Americans with relatively low-cost and reliable power.

Technological changes and relatively low natural gas prices (caused in part by natural gas deregulation), however, are dismantling this utility industry compact and structure. Small-scale electricity generators, using combined-cycle gas-fired turbines and other innovative technologies, now are able to produce electricity below the utility industry’s average price, creating substantial pressure to open power markets. These technologies are being advanced by a new generation of entrepreneurs, who have quickly created a multibillion-dollar business and introduced competition into the electricity market for the first time in 60 years. State and federal regulators, following the example of telephone deregulation, plan to break up electricity monopolies, allowing consumers to purchase power from an array of competitors.

Much has happened in just the past few years. The Energy Policy Act of 1992 and the Federal Energy Regulatory Commission’s subsequent orders have opened the wholesale power market to competition and required utilities to provide nondiscriminatory access to their interstate transmission lines. California, New Hampshire, Pennsylvania, Massachusetts, and Rhode Island have adopted specific plans to achieve retail competition, and most other states are considering the issue. Numerous federal lawmakers have introduced bills to advance utility deregulation.

The traditional utility’s functions are being divided. It seems likely that competitive electricity-generation firms will produce the power, federally regulated companies will transport it across high-voltage transmission lines, and state-regulated monopolies will distribute the power through their wires to individual consumers and businesses. Federally chartered independent system operators will ensure the grid’s stability and fair competition.

Optimists see enormous benefits from competition: lower prices, technological innovations, higher efficiencies, and better services. A study by Citizens for a Sound Economy, which has been criticized for being overly rosy, estimates that retail competition will cut electricity bills by as much as 43 percent and save customers $107.6 billion annually in the long run. Even less upbeat forecasts, however, predict significant consumer benefits. When New Hampshire began its pilot program in spring 1996, for instance, some 30 companies offered to provide electric power to consumers at advertised rates ranging from 10 to 20 percent below the state’s current average. Beyond reduced prices, advocates of retail electricity competition see an opportunity for utilities to follow the example of the telephone industry, which developed a host of new consumer services after deregulation.

The profound changes associated with utility deregulation, of course, raise numerous challenges for policymakers and regulators. They also have launched a cottage industry of lobbyists trying to influence this restructuring. The prospect of increased competition in electricity generation changes the industry’s political dynamics. For almost a century, the major struggle within the electricity market pitted government-owned against shareholder-owned utility monopolies. But a more significant contest has emerged: a battle among an array of businesses, including independent power generators and marketers as well as spin-offs of traditional private and public utilities, competing to supply electric power to consumers. The need for government-owned power companies, therefore, has been called into question.

Living in the past

The early history of federal power still dictates the tone of the debate by some public power advocates and critics. For many proponents of the status quo, even though circumstances have changed dramatically, TVA and the PMAs remain a political cause, and the conflict between shareholder-owned and government-owned utilities remains the key to competition.

In the 1920s, the absence of electric power in the countryside, wrote historian William Leuchtenberg, divided America into two nations, “the city dwellers and the country folk.” Farmers, he said, “toiled in a nineteenth-century world; farm wives, who enviously eyed pictures in the Saturday Evening Post of city women with washing machines, refrigerators, and vacuum cleaners, performed their backbreaking chores like peasant women in a preindustrial age.”

Country folk certainly tried to “go electric.” Time and again they asked the power trusts for service, only to hear utility executives decry rural electrification as too expensive. Power company officials also argued that profits would be low because farmers couldn’t afford appliances that used substantial electricity. Although rural politicians published studies to disprove the company statistics, utility holding trusts, which at that time were unregulated, multistate monopolies, controlled the switch, and lighting America’s farms and small towns remained a dim hope.

The debate during the early part of this century was personified by George Norris, a Republican senator from Nebraska, and Sam Insull, chairman of Commonwealth Edison, at that time a giant utility holding company that controlled electric service in 6,000 communities in 32 states. Although both men believed that electricity would be generated and distributed by a monopoly (a point now called into question), they debated fiercely about whether electric power should be privately or publicly controlled. Insull and other business leaders felt that U.S. strength depended upon the marketplace allocating resources and production; they ridiculed public ownership as “socialism.” But Norris and a growing group of progressives and conservationists believed that a privately owned monopoly would “eventually … come to tyranny,” and that U.S. power development must “be under public control, public operation, and public ownership.”

Lawmakers need to resist equating the welfare of rural consumers with the interests of rural public power managers.

Spirited arguments long focused on control of dams along the Tennessee, Columbia, Colorado, and other mighty rivers. President Franklin Roosevelt capped a lengthy battle by promoting the Tennessee Valley Authority, which he considered a cornerstone of his New Deal and “the widest experiment ever conducted by a government.” FDR also advanced the Rural Electrification Administration, which was to provide low-interest loans to public cooperatives that would build their own power lines and generate their own electricity. It was in the same era that FDR and Congress approved the Public Utilities Holding Company Act (PUHCA), which limited each utility holding company to a single integrated operating system and which is the source of much current debate. In addition, during this period states began to regulate private utility companies more intensely.

By the 1930s, public power had become a political cause. One of TVA’s first directors, for instance, titled his book on the federal agency TVA: Democracy on the March. Even Woody Guthrie, admittedly with a payment from the Bonneville Power Administration, wrote a ballad, Roll On, Columbia, that praises the huge hydroelectric projects and is still sung in elementary schools and at utility picnics throughout the region.

Although federal power’s early history still stirs the political fervor of some public power advocates, the electricity market has changed dramatically since the 1920s and 1930s. Consider several of these changes. First, although only 15 percent of U.S. farms enjoyed electric power by 1935, electric lines now reach 99.9 percent of U.S. homes. Senator Frank Murkowski (R-Alaska), chairman of the Senate Energy Committee, recently noted, “Once upon a time it was good public policy to bring power to regions of the country that did not have it, but that need has long since ended.”

Second, the economies of the Tennessee Valley and many other rural areas have improved significantly. Rural America now is home to thriving businesses such as Saturn Automotive and Gateway Computer. Third, the missions of federal power entities, particularly TVA and Bonneville, have changed dramatically. Less and less of their focus is on economic development, recreation, or navigation. They have become primarily power companies, looking very much like investor-owned utilities, their former nemeses. TVA, in fact, recently proposed abandoning all of its nonpower activities, including navigation, land stewardship, and economic development.

Fourth, and perhaps most important: The original justification for government-owned power, providing a competitive yardstick by which to judge private power companies, is irrelevant in today’s era of independent power producers, power marketers, and load aggregators. All of these entities, including federal power managers, are vying to compete in the new electricity market.

Denying the obvious

The “s” word (“subsidies”) provokes heated arguments from PMA/TVA beneficiaries, largely because taxpayer subsidies are the soft underbelly of public power’s lobbying stance. When lawmakers are trying to reduce the federal deficit, it’s simply hard to defend taxpayer benefits going to help the people of Aspen, Palo Alto, or Hilton Head pay their electricity bills.

PMA and TVA backers on Capitol Hill long have used their political clout to forbid the expenditure of government funds even to study subsidies provided to government-owned utilities. Yet the political dynamics are changing, and independent auditors have begun to document substantial taxpayer benefits being provided to a few preferred customers. The General Accounting Office (GAO), an investigative arm of Congress, has conducted perhaps the most extensive audit to date. Its September 1996 report found that together the three smallest PMAs, Western, Southeastern, and Southwestern, fail each year to recover some $300 million of their costs, shifting that burden from their ratepayers to taxpayers across the country.

Economists have tried to calculate the subsidy to government-owned utilities in several ways. Whereas the above-mentioned GAO report examines PMA costs that are not covered by electricity charges, the Congressional Budget Office (CBO), in its annual review of how to reduce the federal deficit, argues that the federal Treasury could obtain an additional $350 million each year if PMA power were sold at market rates. The U.S. Energy Information Administration takes yet another approach, calculating that PMAs benefit from government loans that provide an annual interest rate subsidy of approximately $1.2 billion. Thus, although government auditors may differ in their methodologies, they all agree that federal power subsidies are substantial.

In a report for investor-owned utilities, Putnam, Hayes & Bartlett, an international economics firm, examined taxpayer benefits to the Tennessee Valley Authority, including the agency’s exemption from federal and state income taxes and from other state and local taxes, and its lower financing costs because the agency’s bonds are partially tax exempt. The researchers quantified those subsidies and competitive advantages at more than $1.2 billion annually.

Not counted in these studies are the favorable borrowing rates that PMAs and TVA obtain by being associated with the federal government. TVA, for instance, despite having a massive debt of $28 billion (and a negative net worth after subtracting unproductive assets), enjoys a AAA bond rating, the highest available. No shareholder-owned utility, despite much better balance sheets, has such a rating. Even though the federal government does not guarantee TVA bonds, the rating agencies assume that such backing exists. According to Moody’s Investors Service, “Although TVA’s debt is not an obligation of the U.S. government, the company’s status as an agency and the fact that the government is TVA’s only shareholder, indicates strong ‘implied support’ [that] would afford assistance in times of difficulty. This implied support provides important bondholder protection. TVA’s extensive nuclear risk, average competitive position, and high level of debt would make it unlikely to maintain its current (AAA) status.” Several analysts suggest that TVA’s large debt and low cash flow should cause its bonds to be rated as below investment grade, that is, as junk bonds. TVA’s artificially high credit rating, therefore, allows the giant utility to issue high levels of debt at low cost. If the agency’s credit rating went from AAA to A (a typical utility’s level), its additional financing charges each year would be some $2.2 billion. Such charges would be even higher if TVA bonds were rated as below investment grade.

PMA and TVA accounting practices have been designed to protect a few select ratepayers in the West and South at the expense of taxpayers throughout the country. Consider that when PMAs constructed their turbines and transmission lines they got to borrow at below-market rates, and to obtain 50-year repayment periods. Although federal legislation said PMA activities were to be “consistent with sound business practices,” the agencies were allowed to pay simple rather than compound interest, to backload the payments, to repay the debt having the highest rates first, and to cover any additional construction costs under the original low-interest loan.

The American Public Power Association (APPA) and the National Rural Electric Cooperative Association (NRECA), the lobbying groups representing the municipal utilities and cooperatives receiving PMA and TVA power, continue to deny the existence of taxpayer benefits. Responding to the above-mentioned GAO study, for instance, NRECA declared: “The PMAs operate as a self-sustaining, no-cost program that actually will return billions of dollars in revenue to the U.S. Treasury each year.” GAO auditors subsequently rebutted each of the lobbyists’ claims, again demonstrating how U.S. taxpayers are picking up the slack. The TVA/PMA beneficiaries themselves inadvertently acknowledge the subsidies when they oppose privatization on the grounds that it will drive the price of power through the roof. They can’t have it both ways, and increasingly lawmakers believe the subsidy calculations of the nation’s top independent auditors rather than of the self-interested recipients of TVA/PMA power.

To divert attention from subsidy studies, TVA/PMA beneficiaries often complain that shareholder-owned utilities are the ones with all the breaks. Yet this approach is backfiring on the managers of government-owned power companies. In a March 1997 hearing before the House Appropriations Committee, TVA’s chairman stated, “If there are any advantages at all, they go to the private power companies and not to us.” Rep. Mike Parker (R-Miss.) quickly retorted: “If private power companies have it so good, then TVA should become one. If they’ve got it so good, I want you to be in that system.”

When lawmakers are trying to cut the federal deficit it makes little sense for taxpayers to subsidize the electricity bills of select consumers.

The existence of taxpayer subsidies for the preferred consumers of government-owned utilities can no longer be hidden or denied. Such benefits will be increasingly hard to defend in this era of deficit reduction and electricity competition because they waste taxpayer dollars as well as distort a competitive market.

Managers of federally owned utilities often argue that they are more efficient than their private counterparts. Yet the Tennessee Valley Authority carries a debt of $28 billion, enough to cause the U.S. General Accounting Office to question the giant utility’s long-term viability. The Bonneville Power Administration has a $16.1-billion debt, much of it the result of ill-fated investments, or stranded costs, in overpriced and unneeded nuclear power plants. The Rural Utilities Service, which provides low-interest loans to cooperative utilities, holds more than $11 billion in government-guaranteed rural electric loans that are in default or classified as problematic. No doubt investor-owned utilities also overbuilt and face stranded costs. Yet policymakers should be skeptical of assertions that public power managers have a monopoly on foresight and management skills.

A key issue in the utility deregulation debate will be whether and how utilities should recover their investments in power plants that are no longer competitive. Stranded costs in the public power context take on some particularly troubling aspects. For private utilities, the worry is that shareholders will lose out if stranded costs are not recovered from ratepayers. For TVA, PMAs, and rural electric cooperatives, however, it is the U.S. taxpayer who will lose out if public power’s stranded costs are not recoverable. A growing number of fiscal conservatives worry about a massive government bailout and a big headache for taxpayers if ratepayers fail to pay for public power’s stranded costs.

Other evidence of the PMAs’ poor management has been revealed by the House Committee on Resources: “The federal power program suffers, in many areas, from poor operating and maintenance practices, questionable ‘investments’ in the underlying facilities, and in some cases poor design and construction criteria.” These practices have led to power outages lasting days or even years. The massive August 1996 power outage in the West has been attributed in large part to the failure of the Bonneville Power Administration to maintain its equipment and appropriately manage the federal facilities.

Manning the barricades

TVA/PMA customers are beginning to wage an aggressive lobbying campaign to save their taxpayer benefits. One of their chief claims is that reforming government-owned electricity companies will hurt rural consumers. Yet it’s shareholder-owned utilities that supply a full 60 percent of rural America’s power. These private firms also service four out of five small-town consumers. Based on these statistics, lawmakers need to resist equating the welfare of rural consumers with the interests of rural public power managers.

Federal power advocates also try to paint a threatening picture that competition will leave certain rural customers behind, just as airline deregulation reduced service to remote locations. But electric poles and wires in rural areas, be they owned by public power or private power, are already in place and will not be torn down. Unlike airline routes, the electricity distribution infrastructure is a fixed, immovable asset. If lawmakers set a truly level and competitive electricity market, the result will be that rural consumers can choose from scores of competitive enterprises wanting to provide power to those distribution networks.

Some public power advocates also claim that deregulation will raise the price of electricity to rural consumers, but electricity rates for 70 percent of rural cooperatives are already higher than the rates charged by neighboring investor-owned utilities. In 15 percent of the cases, rates charged by rural cooperatives are 40 percent higher than those of private power companies, demonstrating that even with subsidies some government-owned utilities cannot compete with shareholder-owned firms.

Federal power managers also argue that the status quo is necessary to protect against market domination by shareholder-owned utilities. They suggest that recent mergers among private companies signal a return to the giant utility holding companies of the 1930s. Charges of market domination require a moment’s reflection. What are we talking about? Market domination certainly doesn’t exist in the transmission system, given that the Energy Policy Act and recent FERC actions require open access to all transmission lines and that FERC regulates transmission charges. It certainly won’t exist in the distribution system, which will continue to be a state-regulated function.

Is there, therefore, domination within the generation market? No. A plethora of companies want to compete in the electricity market. Independent power marketers are providing competition, and they don’t fear market domination by anyone. In fact, market domination will exist only if policymakers allow today’s utilities, be they shareholder-owned or government-owned, to maintain monopolies over their service territories. Monopolists opposing customer choice, those not wanting to open their systems to competition, are the only ones to fear when it comes to market domination.

In another scare tactic, PMA/TVA managers and their beneficiaries suggest that reform will cause environmental damage. Privatization, of course, would get the federal government out of the electricity business, but it certainly would not eliminate the federal role in protecting America’s rivers. Even the most ardent of privatization advocates talk about selling only the hydroelectric assets of PMAs and TVA. Their plans would have the federal government still control the dams and the water flows and still provide navigation, irrigation, fishing, habitat protection and restoration, and other recreational benefits.

Reform actually offers the opportunity for substantial environmental improvements, largely because today’s below-market electricity rates charged by TVA and PMAs provide little incentive for efficiency. A recent study by the Natural Resources Defense Council found that TVA was one of the most polluting utilities in the United States. Market discipline, in contrast, would curb wasteful energy consumption, the construction of unnecessary power plants, and the generation of inordinate pollution caused by the government’s subsidies. Privatization also would allow renegotiation of the terms for operating hydropower facilities, setting future dam-use priorities and improving the protection of endangered species and habitats.

Paths to progress

PMA/TVA beneficiaries are a powerful special interest group. In 1996, rural co-ops alone contributed almost $1.5 million to favored candidates. But the political dynamics of the utility debate have changed dramatically in the past few years. The arguments of PMA beneficiaries, for instance, have become harder to defend. When lawmakers are trying to cut the federal deficit, it makes little sense for taxpayers to subsidize the electricity bills of a few select consumers. When lawmakers are trying to encourage energy efficiency, it makes little sense for the federal government to offer below-market power that encourages consumption, waste, and pollution. When lawmakers are trying to bring competition into the electricity market, it makes little sense for Congress to exempt a large segment of the industry from that competition.

TVA/PMA reform finally has become a real possibility. Reform options, of course, vary substantially. The most direct approach would be to privatize the federal government’s hydroelectric assets. Another would be for the federal government to maintain control of its turbines and transmission lines but to sell its power to the highest bidder. To assuage the current beneficiaries of federal power, lawmakers could provide them with the right of first refusal for that power at market rates.

Over the past decade, electricity privatization programs have been launched by at least two dozen other countries, including highly developed nations such as Australia and Britain, developing countries such as Argentina and Brazil, and former communist states such as Hungary and Poland. Senator Murkowski, who knows firsthand about the privatization of the Alaska Power Administration, argues: “When the rest of the world is trying to get government out of business, so should we.”

Some conservative lawmakers who represent TVA/PMA beneficiaries have conflicting positions on the issue of privatization. Most argue vehemently that the federal government should get out of all business ventures and let the free enterprise system work its wonders. Yet when it comes to subsidized electricity for their constituents, some of those same politicians maintain that Washington should continue to own and control the nation’s largest electric utilities. This is not the 1930s, and there is no market failure to justify government intervention in the electricity market. Indeed, one could argue that there’s far more justification for the Air Force to provide rural airplane service than there is for the federal government to generate electricity.

There’s far more justification for the Air Force to provide rural airplane service than there is for the federal government to generate electricity.

Power brokers, independent power producers, shareholder-owned utilities, and investment bankers all have expressed an interest in buying PMA/TVA assets. Peter Lynch, the former manager of Fidelity’s Magellan Fund, noted, “There has never been a serious effort to privatize the TVA, but if there was I would be the first in line to get a copy of the prospectus.” William Malec, TVA’s former chief financial officer, agrees that the time for privatization has arrived and that TVA’s hydroelectric assets could fetch some $10 billion on the open market. “Selling off TVA is a natural next step,” says Malec. Tucson Electric, a shareholder-owned utility, has offered $550 million for just the Arizona assets of the Western Area Power Administration (WAPA). Otter Tail Power Company, another private utility, has submitted a separate offer to purchase other WAPA assets.

Rather than sell federal hydroelectric assets outright, some economists argue that the government should simply auction off its power. Yet even such a straightforward proposal for having all U.S. electricity sold at market rates frightens TVA/BPA beneficiaries. Bonneville’s chief recently argued that over the long term the status quo would “be a super deal for Northwest customers, much better than market-based pricing.”

Current procedures for selling federal electricity are troubling. When Bonneville recently contracted 200 megawatts of power to a Southern California marketer, for instance, it did not need to conform with the Mineral Leasing Act of 1920, which sets safeguards and procedures associated with federal sales of coal, oil, or natural gas in order to ensure that taxpayers receive “fair market value.” PMAs and TVA, in fact, are not bound by any procedures requiring full disclosure, minimum bids, appeals, audits, or judicial reviews. As a result, billions of dollars’ worth of power produced at taxpayer expense is being sold with less care than would accompany the sale of surplus federal property such as used typewriters or fill dirt.

Representatives Bob Franks (R-N.J.) and Marty Meehan (D-Mass.), co-chairs of the Northeast-Midwest Congressional Coalition, have introduced legislation to ensure that federal taxpayers get fair market value for federal electricity. Selling electricity to the highest bidder would return to U.S. taxpayers the billions of dollars they’ve invested in generating facilities and transmission lines. It would provide a reliable source of funding for maintenance and upgrades at these facilities. With audits, appeals, and judicial reviews, such a system also would curtail today’s backroom deals and block potential corruption.

Utility restructuring legislation must address federal power, if for no other reason than the fact that government-owned utilities represent a significant segment of the electricity industry. While acknowledging public power’s proud history, it’s important to realize that the electricity market has changed substantially, and it will change even more dramatically in the next few years. TVA and the PMAs, just like the rest of the electricity industry, must change as well. Federal utilities cannot continue to be sacred cows. The status quo is simply too expensive for both taxpayers and ratepayers.

A Science Funding Contrarian

The premise of Terence Kealey’s book, that scientific research would do better without government support, has naturally attracted a lot of attention and generated a lot of emotion. Kealey is an impassioned advocate of market capitalism and laissez-faire. He believes that economies and societies do a fine job of self-organizing if left alone and that governments almost always are incompetent and often venal. His attitudes toward science policy strongly reflect these opinions.

He is fascinated by economic, technological, and scientific history; has read widely (if selectively) in these areas; and projects lessons he draws from history into the present. Yet Kealey, who is trained in biochemistry and medicine and teaches in the Department of Clinical Biochemistry at the University of Cambridge, presents a strangely limited view of the way modern science actually works and of the complex relationships between science and technology.

Kealey’s reading of ancient and modern history leads him to articulate two major propositions about science and technology and their interaction. First, science that is nurtured by a state or a society and is isolated from the world of practice is fruitless. This almost surely is true. Second, free commerce virtually automatically generates technological innovation and economic growth. This leads, without government involvement, to the development of whatever science is necessary to support technology. Kealey argues that this natural state of affairs broke down after World War II, as government intervention increasingly stifled scientific, technological, and economic progress.

“Economic laws” unveiled

In the course of making this controversial argument, Kealey puts forth several “economic laws” of scientific research. The first law is that, in the modern world, the ratio of R&D to gross national product (GNP) tends to rise as GNP per capita increases. Kealey emphatically denies that a rise in R&D as a fraction of GNP results in an increase in GNP per capita. Rather, he holds that as per capita income grows, nations spend more on R&D partly because they can afford to do so and partly because their more complex economies draw more intensively from formal science and technology (S&T). Although Kealey’s statement of this argument is somewhat crude, many economists would broadly agree.

Kealey’s second economic law is that, given the first relationship, greater public funding of civilian R&D results in less private support. Further, according to Kealey’s third law, the net result of public spending on R&D is negative: The efficacy of the R&D effort is reduced, as is the overall level of spending. To support his case, Kealey compares Japan, where there is little public support of civilian R&D, with Britain, where there is a lot. He notes that not only is Japan’s high-tech industry more competitive on world markets, but its spending on civilian R&D as a fraction of GNP is higher.

In the last part of the book, Kealey lambastes government S&T policies, particularly in the United States and Great Britain. He has little sympathy for government efforts to bring particular civilian technologies into existence or to support certain high-tech industries, efforts that he finds inevitably clumsy and inefficient, and usually fruitless.

I have considerable sympathy for Kealey’s overall position, but his arguments here are a bit simplistic. First, some important technologies resulting from government programs have yielded high civilian payoffs. Jet aircraft and the computer are good examples. Kealey might respond that the key government-funded projects in these cases were aimed at developing military technologies, not commercial ones, and that government programs in these areas for the civilian market have virtually all been expensive failures. Although the Airbus jet aircraft consortium in Europe could be cited to rebut Kealey, his argument is basically correct.

On the other hand, government-funded applied research programs have produced enormous payoffs in several civilian technologies. U.S. agriculture is a striking example. In many areas, publicly funded research targeted at providing basic knowledge and tools to solve practical problems has enormously increased the power of privately funded applied R&D. Kealey’s own field of biomedical research illustrates this clearly. Virtually all modern pharmaceutical development takes place at for-profit firms funded by private money, but it relies greatly on knowledge gained through publicly funded research. It is odd that Kealey does not recognize this.

Going to extremes

Kealey’s view that government should just get out of the business of supporting science is particularly wrongheaded and dangerous. Government support, he says, is wasteful, poorly directed and managed, basically harmful to the workings of the scientific community, and simply not needed. In his view, industry and private philanthropy can be counted on to support, directly or indirectly, the research that society needs, and to do so in a way that is more efficient and more conducive to good scientific research.

Kealey further argues that government support of science is rationalized by the linear model, and because this model simply is false, there is no case for government support. It is true that such influential manifestos as Vannevar Bush’s Science: The Endless Frontier put forth the linear model to support government funding of basic research, with control of allocation left principally to scientists. It is also true that in most areas of technological change, the linear model is not a good characterization of the relationship between science and technology. But one does not need to believe in the linear model to be a strong advocate of public support of science. And Kealey’s own beliefs about the relationships between science and technology seem as inadequate a general characterization as is the linear model. Let me pick up this latter matter first.

Contemporary scholars who study the relationships between science and technology recognize that these are complex and differ greatly from field to field. In many fields, technological advances hardly tap recent scientific developments. Kealey stresses this. Also, in many cases, technological development leads to the initiation of scientific research and even to whole new scientific fields, rather than the other way around. The field of metallurgy arose because steel became an important economic commodity. Solid-state physics became an important field of research after the birth of the transistor. Computer science obviously came after computers. Kealey points to cases like these.

But although these latter cases do not fit the linear model, they clearly involve areas where scientific research done at universities contributes to technological advance in industry. Indeed, that is the intent of research in fields such as computer science and pathology. A strong case can be made that a rapidly advancing technology almost always has a closely affiliated science.

Not all useful science is intentionally oriented toward areas of technology. There are many striking examples where the simple linear model looks right. That is, scientific research undertaken with only the broadest notions of the practical payoffs lays the basis for revolutionary advances in technology. Again, biotechnology is a good example.

One need not believe in the linear model to support government funding of university research. The fact that a field of science contributes to technical advance by intent certainly does not mean that public support of that science is not warranted. The principal arguments for public support of science are that knowledge won through fundamental research is nonrivalrous in use, and that in many cases it is difficult for a person or an organization to keep that knowledge out of the hands of others or to force all who use it to pay a fee.

The first of these arguments is persuasive by itself. Even when it is possible to make basic knowledge won through research private and proprietary, society pays a cost, perhaps a very large one. There are very sound economic reasons for keeping knowledge, a nonrivalrous good, in the public domain.

Keeping knowledge public

We badly need strong and effective arguments for keeping fundamental scientific knowledge public. With the policy discussion in the United States leading the rest of the world, the fashion increasingly is to argue that there is great value in making new science proprietary. The Bayh-Dole Act is predicated on exactly that largely dubious and in many cases quite pernicious idea. Patenting gene fragments is a clear contemporary manifestation of this belief. But access to fundamental knowledge should not be rationed, even if it can be. There are real economic costs associated with privatizing basic science.

Of course, in many areas it is quite difficult to prevent basic knowledge from trickling into the public domain. This surely is one of the reasons why, in recent years, large electronics companies have abandoned or significantly decreased support of basic research. Previously, their powerful market position meant that even if the results of their research trickled out, few competitors could benefit. With the new global competition, that no longer is true. Kealey interprets the fact that private companies occasionally fund basic research as an indication that the knowledge so won is not a public good. But the additional fact that firms other than the funders benefit from the research indicates that it is. Thus, business support of basic research is limited and fragile.

Industry leaders such as William Spencer of Sematech are deeply concerned about the implications of this fact for long-term progress in microelectronics. They struggle to find alternative funding for fundamental research in that field. Frankly, Kealey’s proposal that corporations and philanthropy will fund all the valuable fundamental research is bizarre. National governments also seem to be moving away from support of basic research. Like the companies guarding against competition, nations are shifting support toward more applied research, whose results can more easily be captured nationally.

It is not clear how much influence Kealey’s ideas will have on policy. My suspicion is that those who already believe his argument about privatizing basic research support will pick up and trumpet it. Those who understand the very powerful case for public support of science with the objective of keeping science public will ignore or explicitly reject it. Now is the time to rearticulate the need for and the payoffs from publicly funded fundamental research. Perhaps the blatant extreme of Kealey’s position will serve the useful purpose of focusing the arguments of those who believe in public science.

Pesticides: Kids at Risk

Pesticides are chemicals designed to kill living things: insects, fungi, and weeds that attack crops and other vegetation, cause infectious diseases in humans and animals, or act as vectors of infectious agents. Not surprisingly, they are toxic to nontarget species, including birds and humans. There are 600 pesticides currently registered for use, of which 325 are known to remain as residues in food products, and 60,000 products include pesticides among their ingredients. Several hundred billion pounds of pesticides have been produced and released into the global environment. Nevertheless, pests still destroy an estimated 37 percent of the annual global production of food and fiber crops, and diseases thought to be controlled by the eradication of insect vectors, such as malaria and dengue fever, are resurgent. Huge numbers of agricultural workers and their families are exposed to pesticides and are generally poorly educated in the safe use of such chemicals. The rest of us are exposed at much lower levels through pesticide contamination of drinking water, air, and food, and especially through too-casual use of pesticides in and around the home.

John Wargo, a professor of environmental policy at Yale University, served as a consultant for two major National Research Council (NRC) reports dealing with pesticides: The Delaney Paradox (1985-87) and Pesticides in the Diets of Infants and Children (1988-93). Our Children’s Toxic Legacy: How Science and Law Fail To Protect Us From Pesticides provides a very good insider’s account of the challenges identified and addressed by those panels, a comprehensive examination of pesticide use and regulation, and a proposal to focus science and regulation on protecting children against risks from pesticide residues. Wargo covers the history and ecology of pesticide use, the properties that lead to persistence and/or widespread dispersal of these chemicals, the reasons for insect resistance, some crop management strategies underlying more efficient use of modern pesticides, and the responsibilities and poorly coordinated approaches of the U.S. Department of Agriculture (USDA), Environmental Protection Agency (EPA), and Food and Drug Administration (FDA). Some of these sections take the reader pretty far from the major thesis of the book, captured in the title, but the book is loaded with interesting data and distressing problems.

Agriculture’s power

Wargo’s major theme is that agricultural interests have dominated pesticide regulation for decades and have helped create an expensive and ineffective regulatory morass that puts consumers, especially young children, at risk from pesticide residues in food. Arguing that the institutional structure for controlling the distribution and uses of pesticides has been insensitive to new information about risks and benefits, he proposes that there be clear quality standards for evidence used to estimate risks; robust analysis of the variance and uncertainties in the magnitude and distribution of exposure and risks, especially for children; greater understanding of the legal, organizational, educational, and cultural conditions surrounding pesticide use; estimation of the cumulative effects of exposures to multiple agents; and a regulatory system in which the burden of proof of safety is placed squarely on the producers. Furthermore, he wants effective communication with the public about risks as well as international agreements to maintain and share records of pesticide use, labeling, and regulatory status. He supports replacing the Delaney Clause, which bans the sale of any food product that contains any trace of a carcinogenic pesticide that concentrates during food processing, with a rule that would permit the presence of pesticides at levels that cause only negligible risk but would expand protection to include all health effects (not just cancer) and would include all pesticides (not just those that concentrate during food processing). Most of all, he wants to replace the provisions in the Federal Insecticide, Fungicide, and Rodenticide Act and the Federal Food, Drug, and Cosmetic Act that require that the risks associated with a chemical outweigh its benefits before its use can be banned with a standard that would forbid the use of all chemicals that create a more-than-negligible risk to human health. After calling for all of these technical assessments, however, he predicts that future historians will be “puzzled over the late-twentieth-century obsession with technical forms of analysis that distracted us from seeing the relatively simple moral dimensions of the pesticide problem.”

Issues readers who are veterans of NRC panels (or other advisory groups) will find it interesting that both panels went far beyond the standard evaluation of existing literature. Both determined that the published literature was inadequate for their task, that existing federal databases had not been tapped, and that new analyses should be conducted from primary data in EPA and FDA files. As related by Wargo, the Delaney Committee tried to judge the effectiveness of federal law in controlling cancer risks from pesticide residues in foods. To estimate exposure, they tried to link food intake data collected by USDA, data on tolerances (maximal allowable concentrations) for pesticide residues in foods set by EPA, and data collected by FDA on actual residue levels in foods. The data systems were completely incompatible. Acknowledging the deficiencies of the data, the committee used what it had to develop its best estimate of risk.

Although published with careful caveats, the conclusions were criticized by manufacturers and farmers for assuming that everyone ate all the foods and that the foods were contaminated at the maximal allowable level, and by environmentalists and consumer advocates for ignoring the combined effects of exposure to numerous pesticides. Nevertheless, the committee did identify the “paradox”: The zero-risk standard of the Delaney Clause was so stringent that it was rarely enforced. The committee claimed that switching to a negligible-risk approach would eliminate 98 percent of the risk attributed to the 28 carcinogenic pesticides that they studied closely. But risk for whom? Are some people at markedly different risk due to age, genetics, multiple exposures, particular dietary habits, marked variation in level of residues, and other factors? Yes indeed, say Wargo and everyone else who has examined this question. The most important subgroup that was systematically neglected was young children, whose diets are dominated by fruit juices and vegetables, whose intakes are much higher per kilogram of body weight, and whose developing organ systems and metabolism make them more vulnerable. Thus, promptly after publication of The Delaney Paradox, the NRC Board on Environmental Studies and Toxicology launched the panel that produced Pesticides in the Diets of Infants and Children. After protracted efforts to penetrate the FDA databases, landmark analyses were published. These are summarized in considerable detail in Wargo’s book and reflected in his recommendations. However, the government was slow to take action to better protect young children in the years after Pesticides in the Diets of Infants and Children was published.

Bipartisan reform

Formal nongovernmental dialogues bringing together industry, consumer, and environmental advocates finally framed a consensus that the Clinton administration and the Congress could embrace with the Food Quality Protection Act of August 1996. Unlike the “regulatory reform” proposals of the 104th Congress that generated heated debate, this bill reflected a bipartisan compromise that offered something for everyone. Deregulatory forces welcomed the end of the Delaney Clause absolutism with respect to pesticide residues, and environmentalists and consumer advocates were pleased that tolerance levels were reduced to reflect the vulnerability of children.

Moving quickly to implement the 1996 legislation, EPA announced in March 1997 the establishment of a single health-based standard for all pesticide residues in both raw and processed foods and a risk-based approach along the lines recommended by the Presidential/Congressional Commission on Risk Assessment and Risk Management. When establishing tolerance levels, EPA will now use an additional 10-fold safety factor to reflect the risk to children (0.001 instead of 0.01 of the no-observed-effect level in rodents), unless there are sufficient data to justify a smaller evidence-based safety factor. EPA’s analyses will aggregate exposure to the specific pesticide from all nonoccupational sources; include the effects of cumulative exposure to additional substances with similar mechanisms of toxicity; consider the effects of exposure in utero; and address adverse health effects other than cancer, including endocrine disruption. Despite dire predictions that this workload would be infeasible and that the registration process would be paralyzed, by March 1997 EPA had issued approvals for six new active chemical ingredients, 10 new biological pesticides, and one antimicrobial pesticide under the new law. Meanwhile, EPA Administrator Carol Browner appointed Philip Landrigan, who chaired the NRC Pesticides study, as senior adviser to EPA’s child health initiative, which also highlights lead poisoning, asthma, childhood leukemia and brain tumors, and reproductive anomalies.
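The arithmetic behind the added children's safety factor is simple, and a short sketch may make it concrete. The no-observed-effect level used below is hypothetical, and real tolerance setting involves far more than this single division; the sketch only illustrates how the extra 10-fold factor shrinks the allowable exposure.

```python
# Illustrative sketch of the safety-factor arithmetic described above.
# The NOEL value is hypothetical, not an actual EPA figure.

def reference_dose(noel_mg_per_kg_day: float, extra_child_factor: bool) -> float:
    """Divide a rodent no-observed-effect level (NOEL) by the safety factors.

    The conventional factor is 100 (10 for animal-to-human extrapolation
    times 10 for variation among people); the 1996 law adds another 10-fold
    factor for children unless data justify a smaller one.
    """
    factor = 1000 if extra_child_factor else 100  # i.e., 0.001 vs. 0.01 of the NOEL
    return noel_mg_per_kg_day / factor

noel = 5.0  # hypothetical NOEL, in mg per kg of body weight per day
print(reference_dose(noel, extra_child_factor=False))  # 0.05 mg/kg/day
print(reference_dose(noel, extra_child_factor=True))   # 0.005 mg/kg/day
```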

These steps, when implemented for the huge array of pesticides and pesticide-containing products, should go a long way toward satisfying Wargo’s call for action. Nevertheless, he would surely still be distressed by the continued chemical-by-chemical balancing of agricultural production needs against risk-based health protection in setting residue levels that are tolerated in foods. And from a global perspective, he would criticize the continued excessive use of broad-spectrum pesticides, promotion of disapproved or restricted agents, and lack of adequate protection of workers and young children in many countries. At least the author and readers can now feel some pride that scientific analyses, policy reviews, and political action have indeed generated significant legislation, improved regulatory practices, and offered the promise of better health protection for children in the United States and throughout the world.

Fall 1997 Update

The privacy of medical records

A year ago, the issue of federal preemptive legislation to protect personal health data was mired in a heated debate within the health care community (Issues, Summer 1996). This debate effectively squelched congressional activity related to the three major bills that had been introduced to address health data protection. Not surprisingly, the 104th Congress adjourned without passing legislation to establish the much-needed national framework for protecting personal health data. Today, the debate surrounding health information privacy is far from resolved, but the issue is attracting increasing attention within and outside Congress, and several factors are increasing the likelihood that the 105th Congress will attempt to fill the current void in protection of health data.

The major factor keeping the issue of health data protection on track is the passage of the Health Insurance Portability and Accountability Act. Its administrative simplification provisions mandate the National Committee for Vital and Health Statistics (NCVHS) to study health care information standardization, security, and privacy issues. The law stipulates that if Congress does not enact health privacy legislation by August 1999, the Secretary of Health and Human Services must consult NCVHS and promulgate standards on rights, procedures, and appropriate uses of health data. Thus, even if Congress fails to act, there will be some form of federal health data protection by 1999. However, in a June 1997 report to the Secretary of Health and Human Services, NCVHS stated that existence of regulatory authority is not an adequate alternative to legislation and recommended that the 105th Congress enact a health privacy law before it adjourns. Secretary Shalala responded to the NCVHS report by announcing that the Department of Health and Human Services would soon send recommendations to Congress for federal legislation.

The forthcoming legislation will join two other bills already introduced that address health information privacy and practices. In addition, several bills have been introduced that specifically limit the disclosure and use of genetic information. And as part of the recent budget reconciliation bill, an amendment to the Social Security Act requires health care providers who participate in a specific Medicare program to establish procedures that safeguard the privacy of individually identifiable information, maintain records in a manner that is timely and accurate, and assure timely access by enrollees to their records.

According to Secretary Shalala, five principles will guide the recommended legislation. First, with very few exceptions, a health care consumer’s personal information should be disclosed only for health care. Second, individuals who legally receive health information must safeguard it. Third, citizens must have the ability to learn who is looking at their records, what is in the records, how to access their records, and what they can do to amend incorrect information. Fourth, anyone who uses information improperly should be severely punished. Fifth, as a society, we must balance the protection of privacy with our public responsibility to support national priorities. If legislation is passed that meets these objectives, a solid foundation for health data protection in this country will result.

Don E. Detmer and Elaine B. Steen

New life for brownfields

Since “Restoring Contaminated Industrial Sites” appeared in the Spring 1994 Issues, several federal and state policies have been introduced to encourage the reuse of the abandoned, underused, and often contaminated industrial properties known as brownfields. As a result, a growing number of successful projects are providing environmental cleanup, reducing neighborhood blight, generating tax revenues, and creating jobs. Much, however, remains to be done to overcome financial and regulatory barriers.

In April 1997, the Clinton administration announced its Brownfields National Partnership, which included more than 100 specific initiatives to link the resources and activities of more than a dozen federal agencies. The Environmental Protection Agency, for instance, expects to set aside $100 million next year to fund additional site-assessment and cleanup activities at brownfield locations. The Department of Housing and Urban Development plans to encourage local governments to use Community Development Block Grant funds and Section 108 loan guarantees for brownfield projects.

Congress in the past two years has passed two significant brownfield provisions. The first, approved in September 1996, spells out the conditions under which lenders could be held liable for loans made to polluters, making clear that normal banking functions such as loan workouts, loan processing, or foreclosures by themselves would not trigger liability for contamination.

The recently approved Balanced Budget Act of 1997 includes tax-code provisions to make it more attractive for current and prospective site owners to clean and redevelop brownfield sites. The Treasury Department estimates that the $1.5 billion in tax relief will leverage more than $6 billion in private sector brownfield activity and encourage redevelopment of at least 14,000 sites.

New directions

Some 18 brownfield bills have been introduced in the 105th Congress, and more are expected. How action unfolds will depend on the approach taken to Superfund reauthorization and on the willingness of key committee chairmen to advance independent brownfield bills. Current proposals include tax incentives to attract investment and provide a cash-flow cushion for companies undertaking brownfield reuse projects, direct capital funding for small companies that have little tax liability or that lack the cash needed to launch brownfield projects, and regulatory reforms to clarify liability concerns.

Some three dozen states have established voluntary cleanup programs, which bring considerable certainty to the remediation and reuse process. Among the innovative proposals now before state legislatures are loan guarantees to private lenders making loans on brownfield properties (in Massachusetts), a contaminated-property remediation insurance fund (in Connecticut), and joint state-local property tax credits to encourage reuse by offsetting increased property values stemming from cleanup (in Maryland).

It will require action on these proposals, plus much more government effort, to level the economic playing field between greenfield locations and brownfield sites. In particular, Congress and the states must provide the framework that makes more brownfields viable for economic activity and encourages the private sector to invest in redevelopment projects.

Charles Bartsch

Toward a “Greener” Revolution

Thanks in large part to the now-legendary green revolution, most people in the world today get enough calories from food for their subsistence. Yet it is becoming increasingly clear that the green revolution was not an overwhelming success. Although it helped increase the production of staple foods, it did so at the expense of overall nutritional adequacy. Today, large numbers of the world’s people remain sick and weak because of terrible nutrition.

The green revolution increased the overall production of high-yielding rice, wheat, and maize, which provide most of the “macronutrients” people need in large quantities, notably carbohydrates and protein. But these foods do not provide the “micronutrients” needed in smaller quantities: iron, zinc, iodine, vitamin A, beta carotene, selenium, copper, and other compounds and essential elements that are just as critical to health. In addition, the increasing production of staples displaced the raising of local fruits, vegetables, and legumes that were the chief sources of micronutrients for most people.

As a result of this growing imbalance in food production, an insidious form of malnutrition plagues the world today. More than 2 billion people-about 40 percent of the world’s population-now face debilitating diseases because their diets are dangerously low in precious micronutrients. Children’s growth is stunted. Adults are weak and sickly, unable to resist disease and infection. This hidden hunger decreases worker productivity and increases morbidity rates, condemning people and their developing nations to vicious circles of ill health and low productivity, making it impossible to sustain economic growth. Low iron intake alone cripples efforts to improve primary school education-widely acknowledged as one of the greatest levers to a nation’s advancement-because the developing brain needs iron to learn.

Worldwide, iron deficiency leaves 40 percent of all women and 50 percent of pregnant women anemic and causes up to 40 percent of the half-million deaths in childbirth each year. More than 220 million children with diets deficient in vitamin A cannot maintain their immune systems or the lining of their respiratory tracts, succumbing easily to disease and infection. Severe vitamin A deficiency also blinds up to a half-million children each year, half of whom die within six months of losing their sight. Millions of poor people with diets low in iron and zinc cannot fight off malaria, diarrhea, and pneumonia, three of the world’s leading killers.

Nutritional problems are serious in the United States, too. Four of the 10 leading causes of death here-coronary heart disease, cancer, stroke, and diabetes-are associated with diets too heavy in calories, fat, saturated fat, cholesterol, and sodium and too light in plant foods high in fiber and available micronutrients. Poor nutrition plays a central role in obesity, hypertension, and osteoporosis. Iron deficiency affects nearly 20 percent of all premenopausal women and 42 percent of all poor, pregnant African-American women. Folic acid inadequacy is increasing the risks of birth defects, heart disease, and stroke, and zinc deficiencies are affecting the immune function of the elderly and the size of infants born to poor black women, and even retarding the growth of upper-middle-income adolescents. According to the U.S. Department of Agriculture (USDA), these chronic diseases cost U.S. society an estimated $250 billion annually in medical expenses and lost productivity, of which $100 billion is directly associated with poor nutrition.

From a systems perspective, the food system is failing. Current U.S. and international policies and programs are to blame, and not changing them will have severe consequences for human health and welfare. Fixing the system is not just a social responsibility; failing to do so also carries political liabilities. Malnutrition and poor health in countries overseas increase reliance on international aid. They also exacerbate social instability, leading to mass unrest and political tumult that jeopardize U.S. international relations. A 1993 study by Jere Behrman at the University of Pennsylvania shows that investing in nutrition is one of the most economically efficient ways to strengthen market-based economies in developing nations, which would significantly benefit U.S. trade.

What the United States and the world need is a second food revolution-a greener revolution that will provide not just more food but more nutritious food, ending reliance on the extremely expensive and piecemeal distribution of supplements and food fortification programs that are now the mainstay of global nutrition policies. The failing food system can be fixed in four ways: by breeding crops that have higher micronutrient contents, increasing the diversity of food crops, reducing the losses of nutrients that occur with current harvesting and food production techniques, and changing the mix of foods eaten during meals in ways that promote better natural absorption of nutrients by the body. Pursuing these policies will require major changes in direction by domestic and international policymakers and research organizations.

Unintended evil

Feeding the developing world has been addressed primarily by increasing the production of starchy staple foods (cereal grains) and correcting specific nutrient deficiencies that lead to disease. Programs to increase production have been implemented largely through the Consultative Group on International Agricultural Research (CGIAR) and various national agricultural research organizations. Thanks in large part to the tripling of fertilizer use, a one-third increase in the amount of irrigated land, and the development of high-yielding cereal varieties, many developing nations have realized impressive gains in the production of rice, wheat, and maize. The production of rice and wheat in South Asia, for example, increased by 200 percent and 400 percent, respectively, during the past three decades. The global availability of calories rose to its present estimated level of 2,720 kilocalories per person per day, which is about 16 percent above minimum needs, preventing famines in many countries.

But the focus on grains had an unintended and unfortunate consequence: It reduced the diversity of traditional cropping systems. Farmers adopted simpler rotations of the higher-yielding and more profitable grains and abandoned lower-calorie foods that were nonetheless generally higher in protein and micronutrients. Crops such as pulses (the peas, beans, and lentils from leguminous plants) were displaced. This trend was exacerbated by the failure of plant breeding to produce higher-yielding varieties of these micronutrient-rich crops and by policies, especially free water and guaranteed prices, that subsidize grain production. Today, the production of pulses in South Asia is only 87 percent of what it was 30 years ago. At the same time, the production of fruits and vegetables did not keep up with the needs of growing populations in eight South and Southeast Asian countries. Further damage to balanced diets came from mass milling and polishing of grains, which remove the bran and germ, the parts of the grain where the micronutrients are stored. Together, these effects have resulted in lower availabilities and higher prices for micronutrient-rich foods. This particularly hurts low-income families.

Cost-effective approaches

Various organizations, such as the UN Food and Agriculture Organization (FAO), the World Bank, CGIAR, the World Health Organization, UNICEF, and others commit hundreds of millions of dollars annually to improving agricultural production, health care, and disease prevention by distributing supplements (such as vitamins) and fortifying foods during production. Current programs are cost effective. World Bank studies show that the productivity gained by a country due to a healthier population (per program dollar spent on vitamin A, iron, and iodine supplements and food fortifications) can result in a benefit-to-cost ratio ranging from 2:1 to as high as 260:1.

However, sustaining these kinds of programs is a problem. They require a great deal of oversight and logistical control. When budgets get cut, the distribution system breaks down and is often not subsequently fixed. Some countries don’t add additives to foods correctly and some don’t add the additives at all. Fortifying foods requires sophisticated food processing technology that many developing countries cannot afford. Finally, many of the programs do not provide nutrient balance because they focus on one or a few nutrients, neglecting the other essentials. It is simply more effective to breed, grow, harvest, and sell or distribute a better mix of more nutritious, locally available foods.

Yet the many organizations named above devote little or nothing to improving the actual nutrient content of crops. To make matters worse, national departments of agriculture and agricultural universities in developed countries concern themselves almost exclusively with improving crop yields, not with the nutritional content and diversity of cropping systems that are fundamental to improving even an advanced nation’s health and well-being.

Recently, Howarth Bouis at the International Food Policy Research Institute (IFPRI) in Washington, D.C., calculated the annual cost of an iron-fortification program in India and compared it to the spending that would be necessary for a more sustainable food systems approach: breeding iron-rich staple crops. The iron-fortification program would cost a minimum of $0.10 per person per year when all administrative costs were included. The annual cost to fortify food for only half of India’s 880 million people would therefore be $44 million. Although such an expenditure is probably cost effective in terms of return on investment, it is still a large sum of money and would need to be justified and allocated each year by the Indian government. Bouis then calculated the cost of breeding iron-rich rice, wheat, beans, maize, and cassava. The research required to develop these staples over five years was estimated to cost about $2 million for each crop, totaling $10 million for the five crops, much less than the cost of just one year of food fortification. And the plant breeding strategy is a one-time expense and is transportable to other countries. The economics of plant breeding overwhelm the economics of supplementation or fortification.
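A rough calculation using only the figures cited above shows how lopsided the comparison is. The sketch below simply restates Bouis’s numbers, so the inputs, not the arithmetic, carry all of the uncertainty.

```python
# Back-of-the-envelope restatement of Bouis's comparison, using the figures
# cited above (the inputs are his estimates, not new data).

population = 880_000_000               # India's population in the calculation
covered = population // 2              # fortification reaches half the population
fortification_per_year = 0.10 * covered        # $0.10 per person per year
breeding_one_time = 2_000_000 * 5              # $2 million per crop, five crops

print(f"Fortification, each year:  ${fortification_per_year:,.0f}")  # $44,000,000
print(f"Breeding, one-time total:  ${breeding_one_time:,.0f}")       # $10,000,000
```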

Diverse foods are key

Adequate nutrition can best be provided by basing diets on a wide variety of foods, including pulses, animal products, vegetables, and fruits. The poor, however, are usually forced to depend almost exclusively on low-cost, high-energy, but low-nutrient starchy foods. Polished rice now provides 85 percent of the caloric intake in Bangladesh, and wheat flour provides nearly that percentage of calories for the poor in Pakistan. The decreasing production of micronutrient-rich foods has been identified in many parts of South Asia, China, sub-Saharan Africa, and South and Central America.

Although continued population growth makes it imperative to continue increasing total agricultural production, focusing only on caloric output will worsen micronutrient malnutrition. In parts of Ethiopia, malnutrition has persisted despite heavy investments in agricultural infrastructure that have produced significant increases in the production of staples. Egypt has increased staple food production by more than 600 percent in 30 years, yet problems of micronutrient malnutrition such as anemia and stunting actually appear to be growing. In Mexico and Kenya, iron, vitamin A, and iodine deficiencies continue to affect enormous numbers of people even though agricultural production is at an all-time high.

A large part of the reason we are losing ground is that the health community has treated malnutrition as it does disease: Find a single fix for a single symptom. Although supplement and fortification programs in developing countries often succeed initially, they soon encounter insurmountable economic, political, social, and logistical problems, and their costs make them dependent on international support.

A typical example is Sri Lanka’s Thriposha (triple nutrient) program. It was designed to supply energy, protein, and micronutrients in a precooked cereal-based food free of charge to poor mothers and children. Started in 1973, the program was administered through school systems and clinics and became an important part of the country’s nutrition policy. Yet its goals were never reached. Instead, Thriposha was eaten primarily by the men in households, and some families deliberately kept their children underweight in order to stay eligible for the program. Some mothers used the supplement as a replacement for other food, which often resulted in no net increase in nutrient intake by their children. Consequently, in 1995 the Sri Lanka Poverty Alleviation Project recommended that the program be discontinued.

The vitamin A supplement program in Indonesia has also fallen short. With extensive training and close supervision of highly motivated community health workers, the program achieved 77 percent coverage of children within its first two years. But after 15 years, the coverage has dropped to less than 50 percent, largely because government motivation and support have dwindled. In Bangladesh, the “universal” vitamin A supplement program reaches only an estimated 36 percent of the target population.

Model programs

Clearly, better approaches are needed to meet the increasing nutritional demands of a world that expects to add at least 2.5 billion people during the next 25 years. A greener revolution directed at increasing the production of micronutrient-rich foods would ensure sustained improvements in health. To accomplish this task, agriculture and nutrition must be viewed in the larger context of the food system, which involves the production, distribution, and utilization of food. This requires a new mindset for the development of agricultural and food policies, in which the measure of success is not in terms of production but in terms of human nutrition and health. The principal objective of a food systems approach would not be to produce more food but to produce more healthy people.

International programs to boost nutrition by distributing supplements and fortifying foods are expensive and difficult to sustain.

Several food system programs are already achieving this measure. In the remote Xinjiang Province of China, table salt iodization programs failed to reduce iodine deficiency diseases for a variety of cultural reasons. But when iodine was added to irrigation water, the iodine content of all irrigated foods and feedstuffs increased, improving the iodine health of the people as well as livestock. Iodine deficiency diseases were largely eliminated, improving community and family health and providing economic gains for farmers.

In Thailand, Bangladesh, and Zimbabwe, the availability of foods rich in beta carotene, a precursor to vitamin A, has been greatly increased through concerted efforts to popularize home gardening. These programs have increased the amounts of vegetables and fruits consumed at home and sold inexpensively at local markets frequented by low-income families. Meanwhile, researchers at the International Center for Tropical Agriculture in Cali, Colombia, have identified a strain of cassava, a staple in South and Central America and western Africa, that is high in beta carotene.

Other food system approaches are being tested. Half of all arable lands are regarded by scientists as deficient in iron and zinc, despite the fact that those soils contain ample amounts of both minerals. The problem is that the roots of today’s high-yielding crops are poor at absorbing these minerals. In Turkey, where extensive zinc deficiency is causing stunting among children, current strains of wheat are unable to absorb the zinc that is tied up in the soils. In a NATO-sponsored research project, scientists are breeding high-yield varieties of wheat that can better utilize the zinc in the ground, with little or no use of zinc fertilizers. Early results are promising.

A similar strategy is being developed in a collaborative effort of IFPRI, the University of Adelaide, the International Rice Research Institute, the International Center for Tropical Agriculture, the International Center for Maize and Wheat Improvement, and USDA’s Plant, Soil, and Nutrition Laboratory at Cornell University. It will determine the potential for improving rice, wheat, corn, beans, and cassava as sources of iron, zinc, and pro-vitamin A carotenoids. The effort involves screening the world’s collections of germ plasm for these crops to determine whether sufficient genetic diversity exists to breed more nutritious plants.

More nutritious food

Failing food systems can be fixed in four ways: by increasing the micronutrient content of crops, increasing the diversity of food crops, reducing the loss of nutrients that occurs in harvesting and food production techniques, and changing the mix of foods eaten during meals to promote better absorption of nutrients by the body. A good starting place would be to create a comprehensive database of the micronutrient composition of staples, vegetables, and fruits grown in regions around the world. A collaborative effort involving CGIAR and land-grant universities would bring together the expertise needed to accomplish this. These data could then be converted to “nutrient balance sheets” that policymakers could use to assess national food production plans.

Work to breed more nutritious plants should begin with increasing the nutrient content of the staples most important to the diets of the poor. The single greatest advance would be to breed new strains that take up and retain more of the essential minerals (particularly iron and zinc) from the soil. An alternative, mineral fertilizers, is an expense that poor farmers cannot afford and that does not work for iron.

New plant varieties would create an added payoff: Cereal grains with increased zinc, copper, and manganese density have hardier seeds and are more resistant to plant disease and drought. They would raise farmers’ yields, thus lessening required seeding rates, reducing the use of pesticides, and decreasing the amount of watering needed.

Breeding strategies need to improve rice and wheat in another way. The outer layers of cells in these grains are rich in micronutrients, but those layers are removed by milling and polishing; rice and wheat are usually not produced or eaten as whole grain flour. Experiments are needed to determine whether breeding can improve mineral deposition in the edible portion of the grain, the endosperm. Progress could also come from work on genes that regulate where these compounds are synthesized in the plant. And genetic engineering could provide ways to transfer useful genes across species, so that, for example, the efficient mechanism by which beta carotene is stored in maize could be introduced into strains of rice, which are poor at this process.

Work to breed more nutritious plants should begin with increasing the nutrient content of the staples most important to the diets of the poor.

Almost none of this breeding work has ever taken place. USDA’s Plant, Soil, and Nutrition Laboratory at Cornell University, in cooperation with several CGIAR centers and the University of Adelaide, has begun to examine breeding schemes for raising the level of micronutrients in staples. But the USDA Cornell lab is the only U.S. lab doing so. And it is not a commercial breeding operation. More research is needed, as is simultaneous work on ways to apply lab findings to commercial techniques. There is currently very little genetics work being funded in this area, either, although tweaking of genes could enhance the ability of the entire family of grains to draw iron or zinc from the soil or to synthesize more pro-vitamin A carotenoids.

Further work will be needed to make new strains high-yielding. Then they will have to be grown under local conditions in various countries to see which variations are productive in different soils and climates. Only if the final varieties are hardy and high-yielding will farmers be able to make a profit and be willing to grow them.

Land management techniques could also improve the nutrient content of staples. The manure of livestock contains some useful nutrients, and it is routinely applied as fertilizer on U.S. farms. But in areas such as South Asia, where wood and centralized electricity are scarce, dung is often burned as fuel for cooking. Developing alternative fuel sources in such regions would free up vast quantities of nature’s cheapest, most effective, and most nutrient-rich soil amendment where it is needed most.

Reintroducing crop diversity on farms by rotating crops will return nutrients to the soils (the “green manure” effect) and result in more locally grown pulses, vegetables, and fruits. This might require innovative subsidies, at least in the near term. Better education, especially of the poor, is needed so that people learn how to buy a more nutritious mix of foods and how to prepare foods in ways that do not reduce the availability of vitamins and micronutrient minerals.

Home gardening can help, but well-meaning efforts in many countries have overestimated the nutritional impact and sustainability of home gardens because of unrealistic assessments of water scarcity, temperature extremes, availability of seeds and seedlings, fertility of soils, protection from pests and livestock, and losses due to rotting after the harvest. Local policies can help mitigate these factors, and breeding research could result in more rapidly maturing varieties of plants.

U.S. policymakers can help most directly by supporting interdisciplinary research that makes improved human nutrition an explicit goal of agriculture. Universities could change their reward structure to encourage interdisciplinary work and research on food systems. The government should also study how to make policymakers more aware of the profound negative consequences of micronutrient malnutrition.

Breeding more nutritious staples should be made a national priority. Programs that extend credit, incentives, or subsidies to farmers trying new strains would help turn lab results into real food at the market. Policymakers in developing countries can support efforts to educate farmers about new crop varieties and educate the general population about which mixes of foods are most healthy and how best to prepare them. Credits or subsidies for growing fruits, vegetables, and legumes and for converting from dung to new fuels would also help.

Reducing nutrient losses

The nutrient levels in processed foods can be greatly enhanced by changing milling and processing techniques. Instead of stripping away the nutritious bran and germ, higher-extraction and whole grain flours can be produced. Developing the technology is not a big problem. The real challenge is convincing consumers to accept and look for coarser products made with these ingredients.

Another obvious step is to use the micronutrient-rich byproducts from the milling and polishing of grains. Incredibly, in fuel-starved developing countries, these valuable byproducts are burned as fuel at the processing plant. Bran and germ can be used directly as food supplements for local people. Rice polishings could be, too, if an economical process can be found to stabilize the highly unsaturated lipids they contain. Certainly, all three of these substances could at least be better used as soil amendments. The key to reclaiming this wasted resource is providing low-cost fuel to producers. Creating food markets for the byproducts that would pay more than the cost of alternative fuels could be another strategy.

National governments and international agencies should make policies that support the research and development of cost-effective technology for converting the byproducts of milling and polishing into low-cost edible products that consumers will accept. Technology that will better preserve food after it is harvested and good storage methods for fruits and vegetables that are appropriate for poor households would increase the amount of nutritious foods people consume.

Certain foods promote or inhibit the micronutrients a person can absorb. For example, the compound phytate, contained in many staple seeds and grains, can bind with iron, zinc, and calcium in the intestine, preventing those nutrients from being absorbed. Recent evidence suggests that it may be possible to breed reduced-phytate varieties of soybean and corn, but there is concern about reducing seed vigor and crop productivity in the process. Genetic modification might result in seeds that are just as productive but do not contain excessive amounts of phytate. Another approach would be to develop food production processes, such as fermentation, that help decompose phytate.

The absorption of iron and zinc can be improved by eating foods during meals that contain ascorbic acid (vitamin C), which chemically reduces poorly absorbed ferric iron to its more readily absorbed ferrous form and further promotes its uptake. Iron absorption can also be enhanced by eating meat at meals. The mechanism by which this occurs has not yet been fully explained; elucidating it might allow this characteristic to be incorporated into genetically modified plant foods.

A large part of the political solution to this problem is educating people about the right mix of foods to eat and how best to prepare them, including simple techniques such as soaking, which can reduce the amount of inhibitors such as phytate. The simplest advice is for a family to eat at least a modest amount of fruits, vegetables, legumes, and meats, because these are the best sources of micronutrients, and their presence in a meal enhances the body’s absorption of micronutrients from all sources. It is important to note that most poor people want to eat these foods; they just can’t afford them. Any policy that can improve a family’s ability to purchase these foods will help. Governments can also provide tools and resources so local families can grow their own modicum of fruits and vegetables and perhaps raise small livestock.

Changing institutions

The most productive work in improving the food system is inherently multidisciplinary, calling for new partnerships that ignore traditional subject, sectoral, and geopolitical boundaries such as those between the agricultural and health communities. Too often, the policies directing the efforts of government agencies, universities, and private institutions have not been based on holistic thinking. Funding and tenure tracks at universities favor specialized research in narrow fields and do little to reward interdisciplinary work.

Nutrition, health, and sustainable economic development must be viewed as instrumental to each other, and research programs must reflect that vision. A fine example is USDA’s Fund for Rural America, established in 1996, which supports work that promotes U.S. family farms, much of it interdisciplinary and intersectoral. This program should be expanded and its approach extended to other federal research organizations.

It will also be necessary to reorient our considerable federal research and outreach resources from their traditional focus on domestic agriculture to the larger, global food system. That such a reorientation is in our national interest was suggested by a 1994 report of the now-defunct congressional Office of Technology Assessment, which concluded that the increasing international demand for higher-value and value-added foods gives marketing advantages to our technologically advanced food industry. The same conclusion was reached two decades ago by a National Academy of Sciences study on world food and nutrition. If research can make our own food products more nutritious, there will be even greater demand for them overseas.

Changing our research outlook should start with USDA. Its in-house and extramural research programs must move beyond the narrow scope of its strictly domestic focus. Improving the health of people in other nations stabilizes other countries and governments, which helps the United States politically and creates better markets for U.S. products of all kinds. Besides, producing more nutritious foods is equally beneficial to U.S. citizens.

The structure of the entire world’s geopolitical approach to malnutrition, led by FAO, also should be changed. FAO has called for a crash program to greatly increase global food production, but it hinges on further massive increases in irrigation and the spreading of fertilizer-for the existing strains of staple crops. Although the world will need more food, this strategy has been proven faulty: It does nothing to improve micronutrient nutrition or balance the nutrient output of agricultural systems. And the strategy will not even have its intended effect, because without new breeds, production will not increase in the large areas of infertile land where many of the world’s poor live. It has been said that FAO’s plan is a recipe for prosperity in Kansas, not Kathmandu.

CGIAR remains similarly focused only on food production. Part of the problem is a lack of coordination with the 50 or so U.S. land-grant universities, which have a wealth of agricultural expertise. CGIAR does not even have access to lab work at these universities. In order to broaden its research base and programs, CGIAR should form stable partnerships with these schools. This would ultimately shift the nature of research toward improving nutrition. It would also shore up the agricultural research base in the United States, which the government has been steadily dismantling. The land-grant system was built in the 1800s to bring knowledge and technology to farmers, transforming local growing into a nationwide system serving the food needs of the nation. In this century, it was charged again with improving production efficiency, and it succeeded wildly, to the point where less than 2 percent of the population is involved in producing the entire country’s food supply. Today we face the next great challenge: bringing together food production, nutrition, consumer health, land use, and environmental concerns. We should once again mobilize the land-grant university system.

One model for such a collaboration is the Global Research on the Environmental and Agricultural Nexus (GREAN) initiative, which has been proposed to Congress by a consortium of U.S. land-grant universities. GREAN would revolutionize our approach to international food and agricultural development. Annual support of approximately $100 million would make this program productive.

Nongovernmental organizations such as the Rockefeller and Kellogg Foundations can also play important roles, if they adopt food system principles in their intervention efforts. Foundations should be encouraged to shift more resources to multidisciplinary programs that link local agricultural and public health resources. Community-based organizations, which are working increasingly at the village level in a broad range of activities, are ideally positioned to facilitate improvements in food systems. The federal government can take the lead in forming alliances between them and university and federal programs.

U.S. and international agricultural organizations need to alter their single-minded devotion to boosting food output.

Training our best and brightest young people for interdisciplinary work will help. A food system approach, by its nature, calls for problem-based teams of experts who have different specialties and yet are not overly specialized. Developing such people at the undergraduate level means emphasizing experiential learning methods and dismantling the unnecessary divide between the biological and social sciences. At the graduate level, it requires replacing the traditional professor-as-mentor model with research team internships and other participatory experiences. New modes of teaching, including distance and modular education, should be developed to support lifelong learning in the rapidly changing agricultural, food, nutrition, and social sciences. Such an interdisciplinary environment will only become possible, though, when the reward structures at universities are changed.

The great paradox of the green revolution is that even though fewer people are starved of calories, billions of people remain starved of micronutrients. Malnutrition is a political failure and requires a political solution. But there will be no short-term fixes. Success requires vision and a long-term commitment by governments and institutions. Now is the time to begin, before the world’s growing population swamps our already struggling food system.

More nutritious foods will not only improve the health of population groups that are at risk for malnutrition but will also bolster the health of all people. Sustainable development cannot be achieved without a population that is better nourished and healthier, more vigorous, productive, and creative. In the food system concept, people are both the ends and the means.

A Jeffersonian Vision for Mapping the World

About 200 years ago, Thomas Jefferson sat down with a young military officer named Meriwether Lewis to plan an expedition to survey the broad expanse of territory between the then-frontier post of St. Louis and the northwest Pacific coast. Although competing English, Spanish, and French interests made geopolitics a compelling argument in favor of this enterprise (the Louisiana Purchase was not a done deal until the eve of Lewis’s journey in 1803), two other justifications were also used to convince a penny-pinching Congress to fund this bold venture.

The commercial benefits of such an expedition were, for one, easy to see. Whether the Louisiana Purchase was a silk purse or a sow’s ear (some Federalists would argue the latter for years to come) was academic after 1803. To exploit the natural resources of this vast new territory, it was essential to first map it. But Jefferson’s wide-ranging intellect saw other, less commercial yet equally valuable, benefits of Lewis’s explorations. Lewis’s ambitious mandate was not so much to map this wilderness as to understand it. In addition to wanting to know everything Lewis might discover about the region’s geology, soil, vegetation, and wildlife, Jefferson also wished to grasp its human geography-the economy, culture, and politics of the Native American nations Lewis would encounter.

What Jefferson, the son of a cartographer, understood so clearly-that mapping has geopolitical, commercial, and scientific value-seems lost on many of today’s politicians. As a result of technological advances, we are in a position to make the same quantum leap in knowledge about the earth as Jefferson and Lewis and Clark made about the unexplored wilderness of the great Northwest. But although the challenge we face today is similar to that confronting explorers of the American West, the stakes have changed. Today, the international threat of unsustainable development has made an accurate “map” of the world an urgent need.

Technologies such as satellite-based remote sensing (RS) imagery, geographic information systems (GIS), and the Defense Department’s global positioning system (GPS) place us at the threshold of a new “geospatial” era in mapping. Our concept of a map as a two-dimensional picture is no longer adequate to our needs. The irony is that although Jefferson understood how ignorant he and all nonnative Americans were of the land west of the Mississippi, today’s leaders believe that the world is already well mapped and thus known. In reality, it is a myth that the foundation of earth science data collection today is accurate, comprehensive, accessible, and consistent. What is true is that cutting-edge research, new mapping technologies, and satellites are giving us the unprecedented means to create a global foundation for future earth science research.

The nature of the problem we face is twofold. First, much of what is known about earth’s surface today remains inaccessible. Billions of dollars have been spent on the collection of earth science data that, for national security and other reasons, remain tightly restricted. And the data that are available are so poorly organized that they are as useless as a library without a filing system. Indeed, the moon is, in many respects, better mapped than earth is today.

Second, we need to redefine what we understand to be a map. The traditional concept of the map as a two-dimensional picture has served us well. But today a map also serves as a way of organizing data about earth’s surface. Computers have opened up a whole new way of collecting, organizing, analyzing, and disseminating information about earth’s surface. To label such mapping geospatial rather than simply geographic gives this kind of earth science data a discipline-neutral cast, and that is important if mapping is to serve as a means of interagency and multidisciplinary collaboration.

This new geospatial approach to organizing data about earth’s surface is based on three linked sets of remarkable technological breakthroughs. The first is high-resolution, multispectral, “real time” RS imagery that will come of age over the next several years with the launching of more and more commercial satellites. Soon, customers will be able to order pictures of their neighborhoods over the Internet. The second is the rapid commercial expansion of GPS, which permits increasingly accurate and inexpensive positioning of collected data at specific points on earth’s surface. GPS is already being used to monitor ecosystems, route delivery vehicles, and service utility infrastructures. The third is off-the-shelf GIS and desktop mapping software that permit personal computer-based analysis of geospatial data and their incorporation into sophisticated decisionmaking processes, with applications as diverse as business marketing and urban planning. But in order to take full advantage of the RS-GPS-GIS triad of technological advances, we must rethink the way we collectively manage geospatial information today, particularly in the international arena.

The whole world on my disk

Given the abundance of geospatial data that new technologies are making accessible, it is possible to create an infinite variety of useful maps for the future. But to take full advantage of the available data, a map must have geospatial accuracy, appropriate scale, and currency.

Geospatial accuracy means that locations, elevations, and distances on a map consistently reflect those on earth. The science of geodesy tackles the complex problem of mathematically pinning down points on earth’s irregular surface. These geodetic control points permit RS images to be properly positioned in reference to earth’s surface. The positional accuracy of data collected on the road or in the field-georeferencing-is increased by the use of portable GPS receivers. Using a GIS/GPS system, georeferenced data collected by field personnel can be automatically and consistently organized by location. This is particularly important when environmental or socioeconomic changes are being measured over time. The Census Bureau, for example, plans to have every U.S. residence georeferenced for the year 2000 census. With an accurate geospatial foundation, GIS software can “layer” data over a defined area (such as a census tract, a state, or a watershed), thereby making it relatively easy to retrieve, manipulate, and display in a map or graphic form. This built-in capability for accurate georeferencing could revolutionize earth science data collection.
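A toy example may make the idea of layering georeferenced data more concrete. The sketch below is not a real GIS; every record, layer name, and coordinate is invented, and production systems would use spatial databases and map projections rather than a plain dictionary.

```python
# Minimal illustration of organizing field data by layer and location, in the
# spirit of the GIS/GPS workflow described above. All values are hypothetical.

from collections import defaultdict

# Each field observation carries a GPS fix, a thematic layer, and a value.
observations = [
    {"lat": 40.71, "lon": -74.01, "layer": "soil_ph", "value": 6.4},
    {"lat": 40.71, "lon": -74.01, "layer": "land_use", "value": "residential"},
    {"lat": 40.72, "lon": -74.00, "layer": "soil_ph", "value": 5.9},
]

# "Layering" here means indexing every record first by theme, then by location,
# so repeated surveys of the same point can be compared over time.
layers = defaultdict(lambda: defaultdict(list))
for obs in observations:
    layers[obs["layer"]][(obs["lat"], obs["lon"])].append(obs["value"])

print(layers["soil_ph"][(40.71, -74.01)])  # [6.4]
```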

Geospatial scale refers to the area a map attempts to depict. A large-scale map covers a small area and vice versa. The big National Geographic world map in my office, for example, is actually of a very small scale (1:19,620,000), with 1 inch on the map covering about 310 miles on earth. Appropriate scale is essential to geospatial databases because it determines the volume of data required and the types of problems those databases might help solve. Small-scale maps are useful as a global or regional reference and are already used to model such phenomena as global climate change. But there is also an urgent need for large-scale geospatial databases (1:50,000 and better) that can be used by local decisionmakers to help solve pressing development problems from soil erosion to public health to housing.
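The scale figure quoted above is easy to verify: at 1:19,620,000, one map inch represents 19,620,000 inches on the ground, and dividing by the 63,360 inches in a mile gives roughly 310 miles.

```python
# Checking the scale arithmetic for the 1:19,620,000 wall map mentioned above.

scale_denominator = 19_620_000
inches_per_mile = 12 * 5280                    # 63,360 inches in a mile
miles_per_map_inch = scale_denominator / inches_per_mile
print(round(miles_per_map_inch))               # about 310
```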

Geospatial currency is a function of timeliness and reliability and can exist only with constant updating and verification. For example, the world map hanging in my office was no doubt painstakingly researched, but it was made in 1988 and has the Soviet Union emblazoned across that large expanse at the upper right. The map’s outdated political geography is of little concern to me because neither my life nor my job depends on it. But up-to-date geospatial information is of great concern to military officers, real estate agents, and fire fighters. Geospatial information can play a decisive role in resolving international conflicts, as was demonstrated during the 1995 Dayton peace talks when digital maps of Bosnia helped negotiators draw lines of separation. Unfortunately, the vast majority of the world’s decisionmakers rely on outdated paper maps, if any at all, and have no way to obtain current, reliable, and relevant geospatial data.

If most of the world’s decisionmakers rely on outdated maps, they also fail to grasp the compelling need for geospatial data collection, production, and dissemination. At the national and international level, the historic linkage between maps and geopolitics, commerce, and natural resource management (so well understood by Jefferson) seems all but forgotten. What should open the eyes of decisionmakers to the need for better mapping is the growing awareness of the importance and difficulty of sustainable development.

The international threat of unsustainable development has made an accurate “map” of the world an urgent need.

Data for sustainable development

Faced with a growing population and demand for farmland across the Appalachian Mountains, the Continental Congress in 1785 appointed a geographer to oversee a land ordinance survey, which largely established the pattern of land development in the United States for the next century. This national priority to survey and clear new land was driven in large part by the grossly inefficient use of land that was already settled, which left many large landholders, including Jefferson, cash-poor.

Today’s world leaders, particularly those from impoverished countries, face even more daunting pressures because of rising demands from rapidly growing populations, but they do not enjoy the luxury of opening up significant new fertile lands for farming. Tragically, quite the opposite is the case. Each year, millions of hectares of prime agricultural land are significantly degraded or converted to other land uses. Unlike Jefferson, today’s leaders do not need to map new lands but to collect geospatial information about lands already occupied and to ensure that such information is accessible and useful to those making critical decisions about their use. Internationally, we face the urgent task of maximizing agricultural productivity while conserving eroded soils and depleted water supplies. This cannot be done adequately without ecosystem-specific geospatial information.

Although sustainable development has become a mantra for agencies engaged in foreign assistance, and despite their endorsements and ad hoc funding for myriad GIS-related field projects, no development or lending agency is now fully committed to developing a comprehensive geospatial data management system. Although all engage in voluminous geospatial data collection (there seems to be a development project in every village), that worldwide knowledge base is for all intents and purposes inaccessible and thus largely worthless to future sustainable development decisionmakers who will need it the most.

Because they have had more important business than “making maps,” these national and international agencies inadvertently foster a collective blind spot that prevents them from taking advantage of geospatial technologies and ensuring that their economic development policies and investments are based on the best information available. Many large and small companies in the private sector have already embraced the use of GIS-based analysis in corporate management decisions. Foreign assistance agencies should follow suit, and quickly.

A model for a worldwide geospatial master plan already exists in the U.S. National Spatial Data Infrastructure (NSDI). Not exactly a household name and still far from being realized, the NSDI is a plan first outlined by the National Research Council’s Mapping Sciences Committee. Under an NSDI, all government agencies that collect geospatial data, from the local planning commission to NASA, would do so in a way that facilitates data integration and sharing. Government-funded geospatial data collection, which covers almost all departments at all levels of government, should also meet metadata standards that cover both “foundation” data (such as georeferenced RS imagery, geodetic controls, and elevations) and certain “framework” data layers (such as transportation routes, hydrology, and political boundaries). Establishing such basic geospatial data standards has been difficult until now because agencies have been free to collect data in any way they choose. This has resulted in poor and inconsistent recordkeeping, redundant data collection efforts, and many wasted tax dollars. Inherent in an NSDI is recognition by participating agencies that they need to change the way they do business, individually and collectively, if they are to create a more effective, efficient, and transparent public service.
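To make the idea of shared standards concrete, the sketch below shows what a minimal, machine-readable metadata record for a framework data layer might look like. It is only an illustration of the concept; the field names are invented for this example and are not drawn from the actual FGDC content standard.

```python
# Hypothetical sketch of a minimal geospatial metadata record of the kind an
# NSDI-style standard might require; field names are illustrative only and do
# not reproduce the FGDC Content Standard for Digital Geospatial Metadata.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetadataRecord:
    title: str                  # what the data set is
    originator: str             # which agency collected it
    theme: str                  # e.g., "transportation", "hydrology", "elevation"
    coordinate_system: str      # georeferencing, e.g., "WGS84"
    scale_denominator: int      # e.g., 24000 for a 1:24,000 map
    date_of_content: str        # currency of the data
    access_constraints: str     # who may use it, and how
    keywords: List[str] = field(default_factory=list)

# Two agencies that describe their holdings the same way make those holdings
# discoverable and shareable through a common clearinghouse.
roads = MetadataRecord(
    title="County road centerlines",
    originator="Local planning commission",
    theme="transportation",
    coordinate_system="WGS84",
    scale_denominator=24000,
    date_of_content="1997-06",
    access_constraints="public",
    keywords=["framework", "roads"],
)
print(roads.title, "-", roads.theme, "-", roads.date_of_content)
```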

In addition to standards-setting work, the NSDI plan addresses the capacity of agencies to collect and use geospatial data; it even suggests the creation of a “national spatial data clearinghouse.” The Federal Geographic Data Committee, chaired by the Secretary of the Interior, helps implement the NSDI by coordinating geospatial data standards among federal agencies and by promoting these standards among state and local agencies. Thus far, implementation of the NSDI at the state and local levels has been uneven, but 16 states, from Alaska to North Carolina, have already set up specific organizations dedicated to improving geographic information.

The merits of the NSDI concept of sharing geospatial data were proved after the disastrous 1993 floods in the Upper Mississippi River Basin. In response to the flooding, federal, state, and local agencies cooperated in establishing a Scientific Assessment and Strategy Team that advised officials in the many affected states on such critical issues as insurance, relocation sites, habitat restoration, and land use planning. The United States is not alone; other governments have also begun implementing their own NSDIs-not because it is trendy, but because they see it as a prudent public investment, like roads and telephone lines.

Why not the world?

Talk of a Global Spatial Data Infrastructure (GSDI) may be premature given the infancy of various NSDI efforts and the generally weak mandates of the international agencies that would work with a variety of international, national, and other organizations to guide its establishment. Nonetheless, that work needs to begin in earnest. A future GSDI may be different from any current NSDI, but the underlying purpose should be the same: a rational basis for collecting, organizing, and disseminating useful geographic information, particularly that which might assist in a wide range of sustainable development decisionmaking.

The idea that comprehensive geospatial data collection and dissemination transcend national boundaries has already been embraced by U.S. federal agencies, and the newly created National Imagery and Mapping Agency (NIMA) even has an explicit global geospatial mandate. But NIMA’s mission is primarily focused on U.S. military and intelligence applications, not sustainable development. The one place within the U.S. government that is already building key components of a GSDI focused on sustainable development is the U.S. Geological Survey’s Earth Resources Observation Systems Data Center in Sioux Falls, South Dakota, where pioneering work is under way in compiling NASA-generated satellite imagery into worldwide digital terrain, land cover, and watershed maps.

Additional recent global geospatial data collection efforts involve other U.S. agencies, universities, international organizations, and foreign governments. These collaborations are starting to produce impressive results. One exciting new project, co-managed by the International Irrigation Management Institute in Sri Lanka and Utah State University, is developing an electronic world atlas of water and climate data for agricultural research. Another, under the auspices of the UN Environment Programme and government research institutes in the United States, Japan, Mexico, and New Zealand, has completed a globally consistent digital topographic database at the 1:1 million scale. These, though, are still nascent efforts that require substantial long-term support and operate at a scale that serves global needs better than local ones.

Although efforts to create an international map of the world (at the 1:1,000,000 scale) date back to 1891, the first tentative steps toward a GSDI have only just begun. Joel Morrison, chief of the Census Bureau’s Geography Division, laid out a clear GSDI proposal in 1994. In November 1994, a UN-sponsored symposium of earth scientists in Bangkok underscored the urgent global need for developing and maintaining 10 core sustainable development data sets: land use/land cover, demographics, hydrology, infrastructure, climatology, topography, economy, soils, air quality, and water quality. In 1995, 80 representatives from the private and public sectors worked together on the EARTHMAP Design Study and Implementation Plan, which proposed a strategy to advance the use of geospatial data and tools for sustainable development decisionmaking. In December 1995, a World Bank GIS task force also came out with specific recommendations for institutional use of geographic information to improve the bank’s investments and management.

Without responsible government-led guidance, the potential benefits of useful geographic information can easily be frittered away.

Two explicit GSDI efforts are now gaining momentum. The first involves a group of European and North American GIS experts, who held a meeting in Bonn, Germany, in the fall of 1996 to discuss a GSDI framework that focused on geospatial data standards as a means to promote such basic economic requirements as land tenure. On the other side of the world, a complementary effort was already under way. In 1994, Japan’s Ministry of Construction hosted a roundtable conference on global mapping. The ministry now serves as the secretariat for an International Steering Committee for Global Mapping, which is composed of directors of national mapping organizations, among others. In October 1996, the committee issued its “Santa Barbara Statement,” which calls for linking global mapping with international environmental measures, and submitted it to a UN General Assembly Special Session. Both the “Atlantic” and “Pacific” groups have explicitly endorsed a GSDI as a critical step toward a sustainable future. International fora and accords, though, do not make a GSDI. Implementation does. The nuts and bolts of creating a GSDI might be jump-started by a planned NIMA-NASA shuttle mission scheduled for 1999 that will for the first time collect a comprehensive and accurate digital terrain data set for most of the inhabited world.

The United States should lead in the establishment of a GSDI for the same reasons Jefferson sent Lewis to map the wilderness. The first, geopolitics, is more compelling now than ever, with many countries desperately tackling the persistent threat of unsustainable development. It will be manifested differently in each area, but many countries will share in the common human tragedy of violence if agriculture-based economies collapse. Failure to improve natural resource management in vulnerable regions will result in heightened ethnic tensions, political instability, and forced migrations.

A GSDI by no means solves the dilemma of unsustainability, but it does offer an objective framework for environmental accounting and a common basis for subnational, bilateral, or multilateral dialogue over resource development options. If geospatial data and tools become more widely available and affordable, the democratization of decisionmaking, particularly at the local level, can be strengthened. Conversely, unfair restrictions or misuse of these same data and tools could serve repressive regimes. Participating governments in a GSDI will bring their own concerns (national security, proprietary data, and so on) and agendas (who will set the standards? how will they be enforced?) to the table.

Standards are not preordained; they are negotiated compromises. But once established, they need to be firmly and consistently applied. A GSDI, then, will not be an altruistic exercise but a geopolitical vehicle toward better earth science data collection-the foundation for improved international understanding of environmental scarcity (such as water shortages) and ultimately for economically sound collaborative work on solutions.

The second reason, commerce, is notable on both the supply and demand fronts. Geospatial tools and data are already a multibillion-dollar industry; a sound geospatial framework will help that industry (and the many taxpaying companies that helped build it) continue to thrive. But a geospatial marketplace, particularly in developing countries, will be fully realized only if data are current, standardized, reliable, and accessible, which are reasonable GSDI objectives. People will invest in geospatial data and tools that help them solve problems. A GSDI enables this investment through the setting of technology and data standards. Effective use of geospatial data and tools will likely yield a significant economic multiplier as it dovetails with a broader Internet-based global information infrastructure. But the RS-GIS-GPS triad of technologies provides only a potential economic development opportunity.

Without responsible government-led guidance, the potential benefits of useful geographic information can easily be frittered away, benefiting only that minority of the world’s population that is well off and thus needs it least. At the same time, a GSDI is not a Big Brother mechanism for interfering in an expanding geospatial marketplace. It is a cooperative means to lay down some of the rules for geospatial data quality and transactions and to help educate new users of this type of information. A GSDI would foster the creation of a coherent sustainable development knowledge base by building the foundation for a virtual marketplace of georeferenced data collection and analysis. The indirect benefit, of course, is that if a GSDI can improve sustainable development decisionmaking, economies will grow and living standards will rise.

The third linked reason, geography, is based on the substance of a GSDI: that is, “useful” information. Lewis did not collect geographic data for its own sake but because it served a national interest. We now have a very real international interest in relevant sustainable development information. We already have the ability to measure the effects of rapid resource degradation and depletion on living conditions for tens of millions of people. We can thus no longer afford to ignore earth science and socioeconomic data that would improve sustainable development decisionmaking for future generations. A GSDI, though global in scope, would focus on making useful geographic information accessible to local communities. In the near term it could, for example, support Vice President Gore’s Global Learning and Observations to Benefit the Environment initiative to enhance sharing of environmental information among schoolchildren around the world. In the October 1994 issue of Scientific American, geographer Robert Kates argues for a coherent sustainability strategy to “manage the transition to a warmer, more crowded, more connected but more diverse world.” A GSDI should be an integral part of that long-term international strategy.

Jefferson would have immediately grasped the imperative of such an ambitious global mapping mission. In his book Undaunted Courage: Meriwether Lewis, Thomas Jefferson, and the Opening of the American West, Stephen Ambrose quotes Jefferson, whose vision could well apply today to a GSDI: “The work we are doing is, I trust, done for posterity, in such a way that they need not repeat it. . . . We shall delineate with correctness the great arteries of this great country; those who come after us will extend the ramifications as they become acquainted with them, and fill up the canvas we begin.”

Our canvas now is the world. A GSDI and associated geospatial data and tools could provide the palette and brushes, relevant geographic information would yield the colors, and our imaginations and hard work could paint a landscape our grandchildren would want to inherit.

Social Change and Science Policy

One can almost hear the collective sigh of relief coming from the federally funded science community. Only a year ago, analysts were forecasting 20 to 30 percent cuts in funding for nondefense R&D as part of the congressional plan to balance the federal budget by the year 2002. But this year’s budget scenario suggests that a 10 percent reduction over the next five years may be closer to the mark, as continued economic growth enhances the federal revenue picture. Even better news may come from bipartisan political support for R&D in Congress. Senator Phil Gramm (R-Tex.) has introduced a bill calling for a doubling of federal funds for “basic science and medical research” over the next decade, and his ideological antithesis, Rep. George E. Brown, Jr. (D-Calif.), has developed a budget-balancing plan that provides 5 percent annual increases for R&D. Although few would deny that the post-World War II era of rapidly rising federal R&D expenditures has come to an end, current trends seem to imply that the worst fears of science-watchers were vastly overstated. As recently reported in Science: “After two years of uncertainty, the White House and Congress seem to be moving toward stable funding for science and technology.”

Even in the face of such relatively good news, the R&D enterprise is not well served by complacency. Continued exponential growth of federal entitlement programs, if left unchecked, will threaten the budgetary picture for R&D and other discretionary programs for years to come. But such fiscal considerations are only one element of a national context for science and technology that has changed radically in the 1990s and will likely continue to change well into the next century. Successful response to this evolving context may require a fundamental rethinking of federal R&D policy. Failure to respond could lead to a devastating loss of public support for research.

What are the essential components of the new context for federally funded S&T? Here I focus on three emerging social trends whose potential implications are neither sufficiently acknowledged nor adequately understood.

Interest-group politics. From AIDS activists to environmentalists, from antiabortion advocates to animal rights organizations, interest groups composed largely of nonscientists increasingly seek to influence the federal research agenda. This trend is not surprising: As science and technology have become increasingly integral to the fabric of daily life, it is natural to expect that the populace will seek a correspondingly stronger voice in setting R&D policies.

Scientists, of course, may view such activism as a threat to the integrity and vitality of science. But the standard argument that only scientists are qualified to determine appropriate priorities and directions for research is intrinsically self-serving and thus politically unconvincing. Moreover, there is ample evidence that when scientists work cooperatively with knowledgeable activists from outside the research community, science as well as society can benefit. Increased sensitivity about the ethics of animal experimentation, reduced gender bias in clinical trials for non-sex-specific diseases, changing protocols for clinical trials involving AIDS sufferers, and evolving priorities in environmental and biomedical research all reflect the input of groups that were motivated by societal, rather than scientific, interests. Science has changed from this input but it has not suffered. More of such change is inevitable, as exemplified by the success of recent lawsuits brought against the National Academy of Sciences by outside groups seeking to provide input into academy studies.

Societal alienation. In an affluent nation such as the United States, the promise of continual societal progress fueled by more scientific and technological progress will become harder to fulfill, simply because the basic human needs of most people have been met, and the idea of progress increasingly derives from aspirations and satisfactions that are intangible, subjective, and culturally defined. At the very least, the direct contribution of science and technology to the general quality of life in affluent societies may have reached a state of diminishing returns. The promise that more science will lead to more societal benefits may increasingly be at odds with the experience of individuals who find their lives changing in ways they cannot control and in directions they do not desire. For example, continued innovation in information and communication technologies fuels economic growth and creates many conveniences, but it also undermines traditional community institutions and relationships that may be crucial to the welfare of the nation. The resulting disaffection can fuel social movements that are antagonistic to science and technology.

Scientists commonly misinterpret the origins of this public antagonism. Determined opposition to technologies such as nuclear power is often portrayed as nothing more than a reflection of inadequate public understanding of science, coupled with irrational attitudes about risk or technological change. But public opposition may also reflect a rational desire for more democratic control over technologies and institutions that profoundly influence daily life. The recent news of the successful cloning of a sheep portends an acceleration of this sort of tension. Such issues are debated primarily in terms of ethics and values-realms in which scientists have no special standing. The idea that greater scientific literacy among the public will reduce conflict is almost certainly incorrect: Survey results from Europe show that the nations with the highest rates of scientific literacy also display the highest degree of skepticism about the benefits of science and technology and the judgment of scientists.

Socioeconomic inequities. The distribution of wealth in the United States has grown increasingly inequitable over the past two decades. Income disparity between the top and bottom 10 percent of households almost doubled during this period. Family incomes for the lower half of the economic spectrum may actually have declined in the 1980s, whereas incomes for the top 1 percent of families increased by more than 60 percent. Such income disparities translate into inequity of opportunity for education, employment, health, and environmental quality.

At the same time, the transition of the U.S. economy from industrial to postindustrial has been fueled, in no small part, by the scientific and technological advances of the information age. Indeed, the federal investment in R&D is typically justified by scientists and policymakers alike as a crucial component of economic growth. Yet, for a significant portion of the population-those with declining incomes, those who have lost good jobs in the manufacturing sector, and those who graduate from high school or even college with their employment options limited to poorly paid service-sector jobs-the economic and social realities of technology-led growth may not translate into progress. Furthermore, in a free-market society, the problem-solving capacity of science and technology will preferentially serve those who already have a high standard of living, because that is the source of the market demand that stimulates research and innovation. Thus, unless the trend toward increased socioeconomic inequity is successfully redressed, large segments of the populace may eventually realize that they have not benefited and will not benefit from the national investment in science and technology.

As an example, consider that the life expectancy of the average African American is about six years less than that of the average Caucasian. In fact, the life expectancy of African Americans in the 1980s actually declined, the first such decline in this century. African Americans, as a community, might therefore reasonably conclude that a biomedical R&D effort that focuses on diseases of old age and affluence does not serve their public health needs. Such a conclusion could stimulate political action that supports different biomedical R&D priorities or supports a shifting of funds from R&D to other programs.

Obviously, these three social trends are the result of complex social, economic, and political factors of which R&D is just one component. But scientific and technological progress is in fact implicated in these trends because such progress is a prime catalyst for change in the postindustrial world. From the effect of information technologies on the structure of labor markets and communities to the impact of biomedical research on the cost and ethics of health care, the products of R&D influence people’s lives in complex, profound, and irreversible ways that are not always positive or equitable.

A unifying theme that is beginning to emerge from this rapidly evolving social context is that of democratic control over science and technology. Socioeconomic inequity reflects the inability of significant segments of the population to appropriate the benefits of the public investment in R&D. Alienation reflects the inability of individuals and groups to control the impacts of R&D on their lives. In both cases, the democratic process, from local protests to lawsuits to legislation, will be a natural avenue for change. The recent successes of interest groups in the R&D arena demonstrate that change is possible and presage an expansion of such activity in the future.

What’s a scientist to do?

How can the R&D community respond productively to a social context that demands a more democratically responsive science and technology policy? Unfortunately, the responsiveness of the community is compromised by a policy debate whose terms are largely the same as they were in the earliest days of the Cold War. This dogma is rooted, of course, in Vannevar Bush’s famous 1945 report Science, the Endless Frontier. Integral to Bush’s argument is the idea that scientific progress leads inevitably and automatically to social progress. According to this view, the magnitude of scientific progress is the crucial metric of success, because all such progress must ultimately contribute to societal well-being. The incentive and reward system for science in turn is based on this metric. Democratic input into the system is both unnecessary and counterproductive because it cannot improve on the ability of the scientific community to maximize its own productivity.

The idea of a more “democratized” R&D policy understandably generates fear and resistance among scientists who recognize that history is littered with failed and immoral attempts to exert political control over the direction of science. But to equate a more democratically responsive R&D system to Stalinist Lysenkoism or Nazi science is to turn the concept of democracy on its head. Increased democratic input into R&D policy decisions can in fact empower science by creating stronger linkages between research goals and societal goals, linkages that can ensure strong public support well into the future.

The recent rise of special interest groups seeking to influence R&D policy points out both the dangers and the promise of the trend toward democratization. The danger is that the interest groups with the most political and economic power will come to dominate the R&D agenda, perhaps exacerbating the problems of alienation and inequity that can undermine support for publicly funded science and technology. The promise is that the legitimate successes of such groups can help us understand how to design and develop new institutional arrangements for cultivating a more democratically responsive R&D enterprise. Such successes demonstrate not only that an informed public can productively contribute to science policy discourse but also that such contributions can create mutual understanding among scientists and the public, constructively influence the conduct of science in response to evolving ethical norms, and modify the direction of science so that it can better address societal goals and priorities.

How can such outcomes be encouraged? Policies that foster receptiveness to change within the R&D community are crucial. Institutional incentives and goals for research must broaden. Considering the huge financial pressures now faced by universities and government laboratories, the exploration of alternative missions should be viewed as an essential survival strategy. What if public service were rewarded as strongly as a long list of publications or patents? If helping a community or an organization address a technical issue or problem were a criterion for promotion, peer approval would follow. It is hard to imagine that such a change would lessen public support for R&D. Moreover, positive feedback between social needs and the research agenda would begin to evolve at a grassroots level.

On scales ranging from national to local, legitimate mechanisms must be created for enhancing public participation in the process of defining, prioritizing, and directing R&D goals and activities. The congressional authorization and appropriations process is not such a mechanism at present because most input is provided by scientists and research administrators. The efforts of special interest groups to influence this process are a step in the right direction but may lead to distortions of their own. In 1992, the Carnegie Commission on Science, Technology, and Government recommended the creation of a National Forum on Science and Technology Goals. The forum was envisioned as a venue for public participation in the definition of national R&D goals and as a mechanism for incorporating public opinion into the science and technology policymaking process. More recently, the commission recommended the creation of Natural Resource Science Forums to bring scientists together with stakeholders trying to resolve environmental disputes. Both ideas deserve further development.

Numerous European nations are experimenting with ways to more fully involve the public in the science and technology policy process. In Sweden, Denmark, and Norway, considerable progress has been made in linking workers and managers in the manufacturing sector with university scientists to help design innovation paths that benefit both workers and corporations. In the Netherlands, every university has an outreach program aimed at responding to the noncommercial technological problems of local communities. Denmark, the Netherlands, the United Kingdom, and Norway have organized citizen conferences to address controversial aspects of biotechnology R&D, as well as other issues ranging from human infertility to telecommuting. Nascent efforts along these lines in the United States, such as those recently launched by the nonprofit Loka Institute, deserve the strong support and cooperation of U.S. scientists.

The very success of modern science and technology-the capacity to transform every aspect of existence and every institution of society-brings R&D policy inextricably into the realm of democracy. Resistance to the democratizing trend will likely be futile and counterproductive. The challenge facing policymakers and scientists is to embrace this changing social context in a way that strengthens our R&D effort.

Biological Invasions: A Growing Threat

To the untrained eye, Everglades National Park and nearby protected areas in Florida appear wild and natural. Yet within such public lands, foreign plant and animal species are rapidly degrading these unique ecosystems. Invasive exotic species destroy ecosystems as surely as chemical pollution or human population growth with associated development.

In July 1996, the United Nations Conference on Alien Species identified invasive species as a serious global threat to biological diversity. Then in April 1997, more than 500 scientists called for the formation of a presidential commission to recommend new strategies to prevent and manage invasions by harmful exotic species in the United States.

Already, many states attempt to maintain their biological heritage, and a number of state and federal regulations restrict harmful species. Unfortunately, for a variety of reasons, such tactics have failed. Without greatly increased awareness and coordinated efforts, the devastating damages will continue.

Exotic species have contributed to the decline of 42 percent of U.S. endangered and threatened species. At least 3 of the 24 known extinctions of species listed under the Endangered Species Act were wholly or partially caused by hybridization between closely related exotic and native species. After habitat destruction, introduced species are the second greatest cause of species endangerment and decline worldwide-far exceeding all forms of harvest. As Harvard University biologist E. O. Wilson put it, “Extinction by habitat destruction is like death in an automobile accident: easy to see and assess. Extinction by the invasion of exotic species is like death by disease: gradual, insidious, requiring scientific methods to diagnose.”

The cost of inaction

According to a 1993 report by the (now defunct) congressional Office of Technology Assessment (OTA), lack of legislative and public concern about the harm these invasions cause costs the United States hundreds of millions, if not billions, of dollars annually. This includes higher agricultural prices, loss of recreational use of public lands and waterways, and even major human health consequences. About a fourth of U.S. agricultural gross national product is lost to foreign plant invaders and the costs of controlling them. For example, leafy spurge, an unpalatable European plant invading Western rangelands, caused losses of $110 million in 1990. Such losses are likely to increase. Foreign weeds spread on Bureau of Land Management lands at over 2,300 acres per day and on all Western public lands at twice that rate.
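The daily spread rates quoted above are easier to grasp when annualized. The arithmetic below uses only the figures in the preceding paragraph and assumes, purely for illustration, that the rates hold constant through the year.

```python
# Back-of-the-envelope arithmetic from the figures quoted above; the rates are
# assumed constant over a full year purely for illustration.
blm_spread_acres_per_day = 2_300                              # Bureau of Land Management lands
western_spread_acres_per_day = 2 * blm_spread_acres_per_day   # "twice that rate" on all Western public lands

days_per_year = 365
print(f"BLM lands: about {blm_spread_acres_per_day * days_per_year:,} acres per year")
print(f"All Western public lands: about {western_spread_acres_per_day * days_per_year:,} acres per year")
# Roughly 840,000 and 1,680,000 acres per year, respectively.
```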

Other effects on private land are more obvious. The spread of fire-adapted exotic plants that burn easily increases the frequency and severity of fires, to the detriment of property, human safety, and native flora and fauna. In 1991, in the hills overlooking Oakland and Berkeley, California, a 1,700-acre fire propagated by Eucalyptus trees planted early in this century destroyed 3,400 houses and killed 23 people.

Over the past two centuries, human population growth has substantially altered waterways and what remains of the natural landscape. Once contiguous across the entire United States, wetland and upland ecosystems are often mere remnants that are now being degraded and diminished by nonindigenous species invasions. This exacerbates the problem of conserving what remains of our country’s biological heritage.

At the same time, nonindigenous crops and livestock, including soybeans, wheat, and cattle, form the foundation of U.S. agriculture, and other exotic species play key roles in the pet and nursery industries and in biological control efforts. Classifying a species as beneficial or harmful is not always simple; some are both. For example, many imported ornamental plants are used in manicured landscapes around our homes. On the other hand, about 10 percent of these same species have escaped human cultivation, some with devastating ecological or economic results.

Scientists wake up

Until the past decade or so, conservationists were often complacent about nonindigenous species. Many shared the views of Charles Elton in his 1958 book The Ecology of Invasions by Animals and Plants, which introduced generations of biologists to invasion problems. He contended that disturbed habitats, because they have fewer or less vigorous species, pose less “biotic resistance” to new arrivals. Conservationists now realize that nonindigenous invaders threaten even species-rich pristine habitats. The rapidly increasing conservation and economic problems generated by these invasions have resulted in an explosion of interest and concern among scientists.

In the United States, invasive plants that create new habitats and dramatically alter a landscape or water body have some of the greatest impacts on ecosystems. On land, this can mean the creation of a forest where none existed before. For example, sawgrass dominates large regions of Florida Conservation Area marshes, providing habitat for unique Everglades wildlife. Although sawgrass may be more than 9 feet tall, introduced Australian melaleuca trees are typically 70 feet tall and outcompete marsh plants for sunlight. As melaleuca trees invade and form dense monospecific stands, soil elevations increase because of undecomposed leaf litter that forms tree islands and inhibits normal water flow. Wildlife associated with sawgrass marshes declines. The frequency and intensity of fires change, as do other critical ecosystem processes. The spread of melaleuca and other invasive exotic plants in southern Florida could undermine the $1.5-billion effort to return the Everglades to a more natural state.

Throughout the world, such invasions threaten biodiversity. In Australia, invasion by Scotch broom led to the disappearance of a diverse set of native reptiles and to major alteration of the composition of bird species. On the island of Hawaii, the tall Atlantic shrub Myrica faya has invaded young, nitrogen-poor lava flows and ash deposits on the slopes of Mauna Loa and Mauna Kea. Because it fixes nitrogen, it enriches these nutrient-poor substrates, inhibiting colonization by native plants and favoring other exotic species.

Plant communities offering little forage value ultimately lower wildlife abundance or alter species composition. Invading plant species often exclude entire suites of native plants but are themselves unpalatable to native insects and other animals. Two Eurasian plants-spotted knapweed, which infests 7 million acres in nine states and two Canadian provinces; and leafy spurge, which occupies 1.8 million acres in Montana and North Dakota alone-provide poor forage for elk and deer. Likewise in Florida, the prickly tropical soda apple from Brazil and Argentina excludes native palatable species. Losses to the local cattle industry are over $10 million per year, or about 1 percent of gross revenues.

Bird, reptile, and amphibian invasions may also devastate individual native species but generally do not cause as much damage as exotic plants. Herbivorous mammals and insects are often far more troublesome. In the Great Smoky Mountains National Park, feral pigs descended from a few that escaped from hunting enclosures in 1920 devastated local plant communities by selectively feeding on plants with starchy bulbs, tubers, and rhizomes and by greatly changing soil characteristics. In parts of the southern Appalachians, two related insects, the hemlock woolly adelgid and the balsam woolly adelgid, defoliate and kill dominant native trees over vast tracts. Host trees have not evolved genetic resistance, and native predators and parasites of the insects are ineffective at slowing their advance.

The zebra mussel from the former Soviet Union has clogged the water pipes of many electric companies and other industries, particularly in midwestern and mid-Atlantic states. It also threatens the existence of many endemic native bivalve molluscs in the Mississippi Basin. Infestations in the midwest and northeast cost power plants and industrial facilities nearly $70 million between 1989 and 1995.

Death by disease

Introduced animal populations can also harm their native counterparts by competing with them, preying on them, and propagating diseases. For example, a battery of introduced Asian songbirds are host to avian pox and avian malaria in the Hawaiian Islands; native birds are especially susceptible. Introduced species can also gradually replace native species by mating with them, leading to a sort of genetic extinction.

Pathogens are among the most damaging invaders. Plant pathogens can change an entire ecosystem just as an introduced plant can. The chestnut blight fungus, which arrived in New York City in the late 19th century from Asia, spread in less than 50 years over 225 million acres of the eastern United States, destroying virtually every chestnut tree. Because chestnut had comprised a quarter or more of the canopy of tall trees in many forests, the effects on the entire ecosystem were staggering, although not always obvious. Several insect species restricted to chestnut are now extinct or endangered.

After habitat destruction, introduced species are the second greatest cause of species endangerment and decline worldwide.

We have no precise figures on the enormous costs of introduced pathogens and parasites to the health of humans and of economically important species. One such invader is the Asian tiger mosquito, introduced from Japan in the mid-1980s and now spreading in many regions, breeding in stagnant water left in discarded tires and backyard items. It attacks more hosts than any other mosquito, including many mammals, birds, and reptiles. It is a vector for various forms of encephalitis, including the La Crosse variety, which infects chipmunks and squirrels, and the human diseases yellow fever and dengue fever.

Almost every ecosystem in the United States contains nonindigenous flora and fauna. Particularly hard hit are Hawaii and Florida because of their geographic location, mild climate, and reliance on tourism and international trade. In Florida, about 25 percent of plant and animal groups were introduced by humans in the past 300 years, and millions of acres of land and water are infested by invaders. In Hawaii, about 45 percent of plant species and 25 to 100 percent of species in various animal groups are introduced. As a result, all parts of the Hawaiian Islands except the upper slopes of mountains and a few protected tracts of lowland forest are dominated by introduced species.

In western states, invasions have harmed native plant diversity and the production capability of grazing lands. Although the percentage of introduced species in California is not as high as in Florida and Hawaii, large portions of the state, including grasslands and many dune systems, are dominated by exotic plants, and exotic fishes threaten many aquatic habitats. All regions of the United States are under assault.

Damage by exotic species is often best documented on public lands and waterways because taxpayers’ dollars are used for management. However, the problem is at least as pronounced on private properties. The Nature Conservancy, which operates the largest private U.S. reserve system, views nonindigenous plants and animals as the greatest threats to the species and communities its reserves protect. It can ill afford the increasing time and resources that introduced-species problems cost, and the progress it makes on its own properties is almost always threatened by reinvasion from surrounding lands.

Federal failure

The 1993 OTA report concluded that the federal framework is largely an uncoordinated patchwork of laws, regulations, policies, and programs and, in general, does not solve the problems at hand. Federal programs include restricting entry of harmful species, limiting their movement among states, and controlling or eradicating introduced species.

Most of the federal money goes toward efforts to keep foreign species out of the United States. The U.S. Department of Agriculture (USDA) spent at least $100 million in FY 1992 for agricultural quarantine and port inspection. However, most of this effort is aimed at preventing the introduction of agricultural diseases and disease vectors. Moreover, federal efforts to prevent introduction fail because entry is denied only after a species is established or known to cause economic or environmental damage elsewhere.

The Federal Noxious Weed Act of 1974 and the Lacey Act of 1900-the two major laws that restrict entry of nonindigenous species-use blacklists. That is, they permit a species to be imported until it is declared undesirable. Excluding a plant species requires its addition to the Federal Noxious Weed list, a time-consuming process with no guarantee of success. It took more than five years to list the Australian melaleuca tree, and that happened only with the support of the entire Florida congressional delegation. At least 250 weeds meeting the Federal Noxious Weed Act’s definition of a noxious weed remain unlisted. In addition, USDA’s Animal and Plant Health Inspection Service (APHIS) simply failed to act on listings for years, wishing to avoid controversy and research effort. Now there is interest within APHIS in listing noxious weeds, but the agency lacks the necessary staff and funds to conduct the risk assessments needed to justify a listing.

In 1973, a “white” or “clean” list approach was proposed for the Lacey Act. Importing a species would be legal only if it posed a low risk. However, in 1976, the U.S. Department of the Interior abandoned the plan under pressure from pet-trade enthusiasts and parts of the scientific community. The pet trade did not want to assume the burden of demonstrating harmlessness and particularly feared loss of income from new tropical fish. Some scientists thought the approach might exclude certain zoo and research animals even though the proposal specifically allowed permits for scientific, educational, or medical purposes.
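The practical difference between the two listing philosophies comes down to a single decision rule, sketched below. The lists and species names are invented placeholders for illustration, not actual regulatory data.

```python
# Illustrative contrast between blacklist and white-list logic; the lists and
# species names here are placeholders, not real regulatory lists.
BLACKLIST = {"melaleuca", "hydrilla"}   # species already declared undesirable
WHITELIST = {"soybean", "wheat"}        # species already judged to pose low risk

def blacklist_permits(species: str) -> bool:
    """Blacklist rule: importation is allowed unless the species has been listed."""
    return species not in BLACKLIST

def whitelist_permits(species: str) -> bool:
    """White-list rule: importation is allowed only if the species has been cleared."""
    return species in WHITELIST

# An unstudied newcomer slips through a blacklist but not a white list.
newcomer = "unstudied_vine"
print("Blacklist admits it:", blacklist_permits(newcomer))    # True
print("White list admits it:", whitelist_permits(newcomer))   # False
```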

Listing a species on a black or white list can also be scientifically challenging. If a suspected harmful species has not received the necessary taxonomic research to distinguish it from closely related species, especially native ones, the process can be difficult at best. Overall, the Lacey and Federal Noxious Weed acts fail to prevent the interstate shipment of listed species and are only marginally effective in preventing new invasions.

Because Americans demand new exotic plants and animals for aquariums, homes, gardens, and cultivated landscapes, the pet and ornamental plant industries wield enormous political influence at federal and state levels. A 1977 executive order issued by President Carter instructed all federal agencies to restrict introductions of exotic species into U.S. ecosystems and to encourage state and local governments, along with private citizens, to prevent such introductions. The U.S. Fish and Wildlife Service was to lead in drafting federal regulations. When attempts to implement this order met with strong opposition from agriculture, the pet trade, and other special interest groups, the formal regulatory effort was largely abandoned.

Even when states take the lead in attempting to prohibit harmful exotic species, special interest groups have effectively undermined this effort. Recently, the pet industry essentially blackmailed the Colorado Division of Wildlife into exempting an extensive list of exotic species from future regulations. The threat was legislative action that could strip the division of its authority, such as shifting its function to the Colorado Department of Agriculture.

Because of the political power of vested interests, federal and most state agencies use blacklists and do not demand that importers of plants and animals demonstrate that an introduction will prove innocuous. White lists are also problematic because it is extremely difficult to determine if a species will become invasive in any given locale. The precise reasons why some species become invasive and disruptive are usually unknown. Occasionally, there is a long time lag between introduction and when a species becomes troublesome. Brazilian pepper, for example, introduced during the 19th century, became noticeable in south and central Florida only in the early 1960s, but it is now a widespread scourge. Long time lags may be related to factors such as unnoticed population growth, with some sites acting as staging areas for long periods of time; habitat change, rendering waterways and landscapes more prone to invasions; and even genetic mutations, adapting a species to previously inimical local conditions. Synergism between species can also account for long time lags. Several fig species imported as landscape ornamentals into southern Florida during the 1920s have now become invasive because their host-specific fig wasps have since arrived on their own, and their seeds are dispersed by introduced parrots.

Worse, many state and federal agencies are schizophrenic about exotic species. Not only do they have control programs aimed at harmful invaders, they also actively promote the import and spread of potentially invasive exotic species, while giving the potential long-term consequences only minimal consideration. Probably the best example of agency promotion of potentially harmful exotic species is USDA’s Natural Resources Conservation Service, formerly the U.S. Soil Conservation Service, which has a policy of introducing nonindigenous plant species suitable for erosion control. During the 1930s, the agency distributed approximately 85 million kudzu seedlings to southern landowners for land revitalization. By the 1950s, kudzu was a nuisance species, and by 1991, it infested almost 7 million acres in the region. After this disaster, the agency modified its policy and now provides general guidance to its 20 U.S. plant-material centers on testing species for toxicity and for their propensity to become agricultural pests. Still, current review processes fail to screen out potential environmental pests. At least 7 of the 22 nonindigenous plant species released between 1980 and 1990 had invasion potential.

Even when invasive exotic species are federally listed and found in the United States, federal control efforts are often virtually nonexistent. For example, for FY 1998 APHIS has a budget of only $408,000 ($325,000 after overhead and administrative costs) for survey and control efforts for 45 noxious weed species. Similarly, the National Park Service has only $2 million to remove invasive species from its parks this year, despite $20 million in management needs identified by its biologists. Federal agencies’ failure to manage harmful species on their lands can have long-term impacts on abutting state, local, and private lands and can undermine state programs to manage invaders.
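Dividing those control budgets by the workload makes the shortfall plain; the calculation below simply restates the FY 1998 figures cited in the preceding paragraph.

```python
# Simple arithmetic on the FY 1998 control budgets quoted above.
aphis_funds_after_overhead = 325_000     # dollars available for survey and control
listed_weed_species = 45
per_species = aphis_funds_after_overhead / listed_weed_species
print(f"APHIS: roughly ${per_species:,.0f} per listed noxious weed species")   # about $7,200

nps_removal_budget = 2_000_000           # National Park Service funds for removal this year
nps_identified_needs = 20_000_000        # management needs identified by its biologists
print(f"Park Service: {nps_removal_budget / nps_identified_needs:.0%} of identified need funded")  # 10%
```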

About a fourth of U.S. agricultural GNP is lost to foreign plant invaders and the cost of controlling them.

Eradicate or control?

Because invasive species do not respect jurisdictional boundary lines, efforts to eradicate or limit them usually require an enormous degree of cooperation among federal, state, and local government agencies as well as the participation of private interests and broad public support. Eradication of plants, insects, and other vertebrate and invertebrate animals is often feasible, particularly early in an invasion. For example, the Asian citrus blackfly was found on Key West, Florida, in 1934 and was restricted to the island during a successful $200,000, three-year eradication effort. The insularity of Key West was a crucial factor in preventing the fly’s rapid spread. However, in 1976, this same species was discovered in a much larger area centered in Fort Lauderdale. This time eradication did not work; the area infested was too large, and low-level infestations recurred. In 1979, a more modest program of maintenance control or containment replaced eradication. This approach is often the only practical way to limit ecological or economic damage when eradication fails.

However, eradication and even maintenance control often require strong political will. Eradication and control activities that employ insecticides, herbicides, and poisons must be shown not to harm nontarget organisms and humans, and normal scientific standards of proof may not suffice with large elements of the public. Of course, the use of any pesticide today can be controversial.

Pesticides have successfully controlled some invaders, such as melaleuca in Florida and European cheatgrass in the West. However, pesticides are generally expensive, and many organisms evolve resistance to them. Some introduced species can be controlled mechanically, and some, such as water hyacinth, by a combination of herbicide and mechanical harvesters. With enough volunteers or cheap labor, handpicking or hunting can sometimes maintain animals and plants at acceptably low levels, at least locally.

Probably the main method of maintaining acceptable levels of introduced pest plants and animals is biological control: the introduction of a natural enemy (predator, parasite, or disease), often from the pest’s native range. Many biological control programs have achieved permanent low-level control of agricultural pests, and yearly benefits in the United States are around $180 million. However, a biological control agent is also an introduced species, and many survive without controlling the target pest. Whether or not they exert the desired control, some may attack nontarget organisms. In several instances, rare nontarget species have been attacked, and some biological control projects may even have inadvertently caused extinctions. For example, a cactus moth introduced in 1957 in the Lesser Antilles to control a pest cactus island-hopped to Florida, where it nearly destroyed the desirable semaphore cactus.

Some estimates for insects introduced to control other insects are that 30 percent establish populations, but only a third of these effectively control the targets. For insects introduced to control weeds, about 60 percent establish populations, but again only a third control the target plant. Currently, there is insufficient monitoring to know the impacts of these surviving biological control agents on native species, but it is almost certain that once they are established they cannot be eradicated.
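Those percentages compound, because an agent must first establish itself and then actually suppress its target. A rough calculation using the estimates just cited:

```python
# Compounding the establishment and control rates quoted above.
p_establish_vs_insects = 0.30     # insects introduced against other insects
p_establish_vs_weeds = 0.60       # insects introduced against weeds
p_control_if_established = 1 / 3  # about a third of established agents control their target

print(f"Effective against insect pests: {p_establish_vs_insects * p_control_if_established:.0%}")  # ~10%
print(f"Effective against weeds: {p_establish_vs_weeds * p_control_if_established:.0%}")           # 20%
# The remainder persist in the environment without delivering the intended control.
```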

Because of the various problems with the different methods of control and their economic and potential political costs, Congress and state legislatures have resisted creating programs with broad authority to control invasive nonindigenous species. A good example is the Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990, which was reauthorized and broadened in 1996. It establishes substantial hurdles that control programs must overcome, including the need to cooperate with other interested or affected parties. The zebra mussel invasion in the Great Lakes spawned this act, and it is really the first federal legislative effort that is specifically designed to prevent, monitor, conduct research on, and manage invasive nonindigenous species in natural areas.

The CDC’s management of human pathogens could serve as a model for controlling invasive species.

Cooperation is usually needed for successful prevention and control. However, agencies are notoriously jealous of their programs. They may not participate in or may even object to initiatives by others because of policy or resource impact concerns, or just because of the personalities involved. When chemical control is proposed, concerns about human health and the effects on nontarget organisms can quickly derail a program. Also, the ecological impacts of a nonindigenous species, especially if recently introduced, are usually incompletely understood or are a matter of scientific debate. This lack of knowledge can prevent agencies from responding quickly to eradicate or contain an invader. For example, the ruffe, a small perch-like European fish, has become the most abundant fish species in Duluth/Superior Harbor since its discovery there in 1986. A program to prevent its spread eastward along the south shore of Lake Superior called for annually treating several streams flowing into the lake with a lampricide. Cooperation among the various agencies foundered at the last moment because of turf issues, environmental concerns, and limited information about effects, and the ruffe is now expected to expand its range and become established in the warmer, shallower waters of Lake Erie. There it will probably negatively affect important fisheries such as that of the native yellow perch.

Aggressive state action

To control and manage such invasions, states must adopt rigorous white lists, despite the difficulties of doing so. Every proposed introduction must receive the scrutiny currently reserved for species known to have caused harm elsewhere. The literature and databases on introduced species are not sufficiently developed to allow state officials to determine easily whether a species has been problematic elsewhere; this fact alone dooms blacklists to failure. Further, evidence that a species is not problematic elsewhere is no proof that it will not cause damage. The Indian mynah bird is a pest in the Hawaiian Islands, where it feeds on crop plants, is a vector for parasites of other birds, and spreads the pestiferous weed lantana. In New Zealand, it is equally well established but not seen as a serious pest. However, the fact that a species need not have the same impact wherever it is introduced can serve to make white lists less onerous. A plant that cannot overwinter in northern states, for example, might be white-listed there as long as federal or state restrictions on its shipment exclude it from states where it could be invasive.

A second major generic problem with state approaches to biological invasions is the lack of a coordinated rapid response. The adage “what is everybody’s business is nobody’s business” is all too true as it relates to the problem of invasive exotic species at the state and federal levels. The lessons of Florida’s successful efforts to control widespread exotic plants in its waterways illustrate the problems and solutions.

Before 1971, Florida’s aquatic plant management activities were fragmented and piecemeal. Given the diverse ownership of public lands and their varying uses, many state agencies manage exotic species, but they tend to act without coordinating efforts, without adequate funding, and, most important, without considering entire ecosystems. To succeed, a state must first do what Florida did: designate a lead agency to coordinate the efforts of local, state, and federal agencies and private citizens.

With such an approach, Florida has reduced water hyacinth infestation from 120,000 acres to less than 2,000. Other invaders of the state’s waterways and wetlands are in or near maintenance control. These low levels reduce environmental impacts, pesticide use to control them, and costs to taxpayers.

Unfortunately, vast areas of Florida are still being invaded by exotic plants, in large part because of a third problem: inadequate and inconsistent funding. States are often more committed to land acquisition than to proper land management, particularly if pest damage is not obvious or the record of introduction elsewhere is not dramatic. If maintenance control of a weed knocks the level back sufficiently that the public ceases to recognize it as a problem, state funding correspondingly drops. Once controls relax, an introduced species may spread rapidly, presenting a more expensive problem than if funding and management efforts had been sustained. Further, eradication is far more likely during the initial phase of an invasion than after a species is widely established.

Of course, removing an invasive species from public lands does little good if reinvasion quickly occurs from adjacent private lands. Legislatures must develop incentive programs to encourage private citizens to help control invasive exotic species. Tax incentives for removing exotics seem to be the most acceptable way to deal with this problem. If such incentives fail, legislatures should enact penalties, much as some cities require citizens to clear their sidewalks of ice and snow.

Finally, states must make strong educational efforts to ensure that the public understands the threats from nonindigenous species. Without an educated public and legislature, special interest groups can undermine the ability of state agencies to put a harmful species on a blacklist or to keep one off a white list.

Federal leadership

More than 20 federal agencies have jurisdiction over the importation and movement of exotic species, the introduction of new ones, prevention and eradication efforts, and biological control research and implementation. However, no overall national policy safeguards the United States from biological invasions, and federal and state agency policies often conflict with one another. The Federal Interagency Committee for the Management of Noxious and Exotic Weeds has recently taken a small positive step by devising a National Strategy for Invasive Plant Management. This document promotes effective prevention and control of invasive exotic plant species and restoration or rehabilitation of native plant communities. More than 80 federal, state, and local government agencies, nonprofit organizations, scientific societies, and private sector interests have endorsed this nonbinding resolution. Although an important first step, it is basically educational and does not say specifically how to deal with weed problems on the ground. It still falls far short of an effective national program and does not address invasions by nonindigenous animals.

Lacking at the federal level are leadership, coordination of management activities on public lands, public education, and a strong desire to prevent new invasions. A parallel may be seen in the Centers for Disease Control and Prevention, with its missions of preventing new invaders, monitoring outbreaks, conducting and coordinating research, developing and advocating management practices, recommending and implementing prevention strategies, dealing with state and local governments, and providing leadership and training. Perhaps the federal government could develop an analog for invasive plants and animals. A high-level interdepartmental committee might serve much the same function-perhaps an enlarged version of the Federal Interagency Committee for the Management of Noxious and Exotic Weeds or the Aquatic Nuisance Species Task Force with a greatly expanded mission.

Independently of such structural changes, we must enhance state and federal programs in order to use agency personnel more effectively, develop nationwide consistency and cost effectiveness, conduct risk analysis, review and develop legal and economic policies, lower administrative costs, and eliminate duplication of effort. For instance, because APHIS budgets are prepared two years in advance, it is difficult for the agency to adequately fund an immediate response campaign. Also, basic research on an introduced species tends to reflect the curiosity and idiosyncrasies of individual academicians and is not well focused or coordinated.

Complicating the policy issues is international trade, the single greatest pathway for harmful introduced species, which stow away in ships, planes, trucks, containers, and packing material. Increased trade produced by the North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT) is bound to increase the problem. Of 47 harmful species introduced into the United States between 1980 and 1993, a total of 38 came in via trade.

Under NAFTA and GATT, restrictions claimed as measures to protect the environment can be challenged before the relevant regulatory body, which decides whether the restriction is valid or simply protectionist. In GATT’s case, that body is the World Trade Organization (WTO), which in an analogous case ruled that the European Union could not prohibit imports of beef from hormone-treated cattle because the evidence of a health threat was insufficient.

For NAFTA and GATT, species exclusions are to be based on risk assessments, many of which require judgment calls by researchers. The effects of introduced species are so poorly understood, and the record of predicting which ones will cause problems is so bad, that one can question how much credence to place in a risk assessment. Also, the growing complexity of risk assessment methods makes them less meaningful to the lay public and perhaps less responsive and relevant to policy needs. Particularly in controversial cases, as in many concerning introduced species, agreement by all parties is unlikely. Further, assessments are expensive, costing as much as hundreds of thousands of dollars, and funding sources are not established.

To address these trade issues, the federal government must be committed to limiting the import of exotic pests and must present a coordinated federal strategy to support restrictions. As a first step, the National Research Council should convene a high-level scientific committee to review the generic risk assessment processes produced by USDA and the Aquatic Nuisance Species Task Force. Also, all federal agencies that have a role in the trade process must have a common policy on what risk assessment to use and how to pay for it.

The growth of international trade only exacerbates a dire situation. A growing army of invasive exotic species is overrunning the United States, imposing incalculable economic and ecological costs. Federal and state responses have not stemmed this tide; indeed, it has risen. Only a massive reworking of government policies and procedures at all levels, and a greatly increased commitment to coordinating efforts, can redress the situation.

The ITER Decision and U.S. Fusion R&D

The United States must soon decide whether to participate in the construction of the International Thermonuclear Experimental Reactor (ITER). The product of a years-long collaboration among several countries, ITER would be both a major advance in fusion science and a major step toward a safe and inexhaustible energy supply for humanity: practical power from fusion.

The decision about joining this international collaboration is evolving within the context of a severely constrained U.S. budget for fusion R&D. By default, the United States is on the verge of deemphasizing within its national program the successful mainline Tokamak concept, which has advanced to the threshold of fusion energy production. Significant participation in ITER is at present the only way for the United States to remain involved in the experimental investigation of the leading issues of fusion plasma science and in the development of the technological aspects of fusion energy. The decision has broad implications for the national interest and for future international collaboration on major science projects, as well as for the next 30 years of fusion R&D. It would be tragic for the United States to miss this opportunity, which it has been so instrumental in creating, and thereby fail to benefit fully from collaboration in ITER.

Fusion R&D seeks to create and maintain the conditions under which the sun and the stars produce energy. Most fusion R&D has been concentrated on the toroidal magnetic confinement configuration known as the Tokamak. Scientific achievements in the early Tokamaks, together with the energy crisis of the 1970s, led to increased funding for fusion R&D worldwide, which allowed the building of the present generation of large Tokamak experiments in the United States, Europe, and Japan. The world’s nations are currently spending more than $1.5 billion annually (about $600 million in Europe; $400 million in Japan; $230 million in the United States; a large amount in Russia; and smaller expenditures in Australia, Brazil, Canada, China, India, Korea, and so on).

Scientific progress in Tokamak fusion research has been steady and impressive. Researchers had heated plasmas to above solar temperatures by the late 1970s. The “triple product” of the density, temperature, and energy confinement time, which is related to a reactor’s ability to maintain a self-sustaining fusion plasma, has increased a thousandfold since the early 1970s and is now within a factor of less than 10 of what is needed for practical energy production. The production of fusion power has increased from a fraction of a watt in the early 1970s to ten million watts.
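For readers who want the figure of merit spelled out, the triple product can be written as the product of three measured plasma quantities. The ignition threshold shown below is a commonly quoted benchmark for deuterium-tritium fuel, not a number taken from this article; it is included only to make the scale of the remaining gap concrete.

    \[
      n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV\,s\,m^{-3}}
    \]

Here n is the plasma density, T the ion temperature, and τ_E the energy confinement time. If the achieved value has risen a thousandfold since the early 1970s and now sits within a factor of 10 of such a level, the earliest experiments fell short of it by roughly a factor of 10,000.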

U.S. fusion program

The U.S. fusion program currently operates two Tokamaks that are producing state-of-the-art scientific results. DIII-D, at General Atomics in San Diego, was designed and built during the 1970s; the newer ALCATOR-CMOD is the most recent in a line of small, relatively inexpensive facilities pioneered at MIT. A third major Tokamak, the TFTR at Princeton Plasma Physics Laboratory, ceased operation in 1997.

As a result of 25 years of intensive development worldwide, the Tokamak configuration has now reached the point where a new facility that would significantly advance the state of the art would cost at least a billion dollars. Twice in the past decade, Congress has rejected proposals to build a new U.S. Tokamak. Once the aging DIII-D is decommissioned, which will probably happen within 5 years, the United States will no longer have a large Tokamak in operation.

The United States has not only failed to build new Tokamak experiments; it has also failed to adequately support the operation of the experiments it did build or the complementary parts of the fusion program. After peaking at about $600 million (in 1995 dollars) in the late 1970s, the annual U.S. fusion budget has declined to $232 million this year. As a result, fusion researchers have had to abandon many worthwhile efforts. Concentrating on the Tokamak configuration has paid off in terms of scientific advancement, but it has substantially narrowed the scientific and institutional bases of the U.S. fusion program. Several experimental facilities intended to explore alternative magnetic-confinement concepts were shut down prematurely. The fusion technology program was reduced drastically. The broader objectives and schedule associated with the former goal of practical fusion energy have recently been set aside, replaced by the more limited objective of exploring the underlying science.

If the United States does not support ITER, it will be abandoning the centerpiece of its program and the Tokamak concept at a time when advanced operating modes promising enhanced performance and more attractive reactor prospects are developing rapidly. By necessity, the emphasis would shift to alternative confinement concepts for which a state-of-the-art facility is more affordable. Unfortunately, the reason the cost is lower is that these concepts are at least 20 years behind the Tokamak.

Fusion’s promise

The arguments for federal support of fusion research seem compelling. Fusion promises to be the ultimate energy source for mankind because its fuel supply is virtually limitless. The conceptual designers of future commercial fusion plants project electricity costs in the same range as those projected for nuclear and fossil fuels 50 years from now, although projections so far into the future are not very reliable for either. Of all possible energy sources, fusion seems to have the least potential for adverse environmental impact. There are also numerous spinoff applications of fusion R&D. In short, fusion would seem to be the type of long-term high-payoff R&D that Congress should fund adequately.

But after supporting a successful program that led the world for most of the past 30 years, the federal government is no longer maintaining a first-rank national fusion program. It is not clear whether this is simply because competing claims for scarcer resources have attracted stronger support in Congress or because fusion has fallen out of favor.

One criticism is that “fusion is always 25 years away.” There is some truth in this complaint. Fusion plasmas have turned out to be more complex than anticipated by the pioneers of the field. On the other hand, the R&D program proposed by research managers in the 1970s to demonstrate fusion power early in the next century was never funded at anywhere near the level required to achieve such an ambitious objective.

Another criticism is that nobody would want Tokamaks even if they worked because they would be too big and complex to be practical. It is true that simply extrapolating from the design of existing experiments would yield a large, expensive commercial reactor. For many years, researchers paid little attention to optimizing performance because their focus was on understanding the physics phenomena inside Tokamaks. But they have recently demonstrated that the internal configuration can be controlled to achieve substantially improved performance, which suggests that more compact reactors may prove practical. The very complexity of the interacting physics phenomena that govern Tokamak performance creates numerous opportunities for further improvement.

The ITER project

A landmark in fusion development came when the United States joined with the European Union, Japan, and the USSR to work collaboratively toward designing and building a large experimental reactor, first in the International Tokamak Reactor (INTOR) Workshop (1979 to 1988) and, since 1988, in the ITER project. Since 1992, the partners have been collaborating on an engineering design that could serve as a basis for government decisions to proceed with construction of ITER beginning in 1998. The design and R&D are being coordinated by an international joint central team of about 150 scientists and engineers plus support staff. A much larger number of laboratory, university, and industrial scientists and engineers are members of “home teams” in Europe, Japan, Russia, and the United States. The most recent design report runs to thousands of pages, including detailed drawings of all systems and plant layouts. A final design report is scheduled for July 1998, and the procurement and construction schedule supports initial operation of ITER in 2008.

Technology R&D to confirm the ITER design is being performed in the laboratories and industries of the four ITER collaborators under the direction of the ITER project team. Total expenditures for this R&D over the six years of the design phase will be about $850 million in 1995 dollars. The cost (in 1995 dollars) of constructing ITER is estimated at $6 billion for the components, $1.3 billion for the buildings and other site facilities, $1.16 billion for project engineering and management, and $250 million for completion of component testing. Thus, with an allowance for uncertainty, ITER’s total construction cost is estimated at about $10 billion. Subsequent operating costs are estimated at $500 million per year.
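As a rough check on these figures, the itemized costs can be summed directly; the size of the allowance for uncertainty is inferred from the article’s own numbers rather than stated explicitly. A minimal sketch in Python:

    # Bookkeeping of the ITER construction-cost figures quoted above
    # (all in 1995 dollars). The roughly $10 billion total is the itemized
    # sum plus an allowance for uncertainty; that allowance is inferred
    # here, not stated explicitly in the text.
    costs_billion = {
        "components": 6.00,
        "buildings and other site facilities": 1.30,
        "project engineering and management": 1.16,
        "completion of component testing": 0.25,
    }

    subtotal = sum(costs_billion.values())   # about $8.7 billion
    implied_allowance = 10.0 - subtotal      # roughly $1.3 billion

    print(f"Itemized subtotal: ${subtotal:.2f} billion")
    print(f"Implied allowance for uncertainty: ${implied_allowance:.2f} billion")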

After construction, ITER would operate for 20 years as an experimental facility. Initially, the emphasis would be on investigating new realms of plasma physics. ITER would be the first experiment in the world capable of definitively exploring the physics of burning plasmas-plasmas in which most of the power that maintains the plasma at thermonuclear temperatures is provided by the deuterium-tritium fusion events themselves. The second broad objective of ITER is to use the reactorlike plasma conditions to demonstrate the technological performance necessary for practical energy-producing fusion. The superconducting magnet, heating, fueling, tritium handling, and most other systems of ITER will be based on technology that can be extrapolated to a prototypical fusion reactor.

Future fusion reactors must be capable of replenishing the tritium fuel they consume, but this technology will not be sufficiently developed to incorporate into ITER at the outset. Similarly, more environmentally benign advanced structural materials that are capable of handling higher heat fluxes are also being developed, but not in time for use in constructing ITER. Thus, the third major objective of ITER is to provide a test facility for nuclear and materials science and technology development.

After ITER would follow a fusion demonstration reactor (DEMO) intended to establish the technological reliability and economic feasibility of fusion for producing electrical power. The national plans have been for the DEMO to follow 15 to 25 years after ITER initial operation in order to exploit the information developed in ITER. Each party presumably would build its own DEMO as a prototype of the system it plans to commercialize, but further collaboration at the DEMO stage is also possible.

A Tokamak DEMO will be smaller than ITER for two reasons. First, ITER is an experimental device that must include extra space to ensure flexibility and to allow for diagnostics and test equipment. Second, and more important, advanced Tokamak modes of plasma operation can be explored in ITER and subsequently used to design a DEMO based on improved performance characteristics. A recent study showed that the DEMOs could be designed at about half the ITER volume.

It is not widely recognized that the plasma performance and technology demonstrated in ITER will be sufficient for the construction of large-volume neutron sources that could meet several national needs, including neutron and materials research, medical and industrial radioisotope production, tritium production, surplus plutonium disposition, nuclear waste transmutation, and energy extraction from spent nuclear fuel. Recent studies have shown that it would be possible to use a fusion neutron source based on ITER physics and technology for such applications.

Time to decide

The time for decisions about moving into the ITER construction phase, about the identity and contributions of the parties to that phase, and about the siting of ITER is close at hand. The four partners are currently involved in internal discussions and informal interparty explorations. The prime minister of the Russian Federation has already authorized negotiations on ITER. The chairman of the Japan Industry Council has called publicly for locating ITER in Japan, and a group of prominent Japanese citizens is working to develop a consensus on siting. In 1996, the European Union Fusion Evaluation Board, an independent group of fusion researchers, declared that “starting the construction of ITER is therefore recommended as the first priority of the Community Fusion Program” and that “ITER should be built in Europe, as this would maintain Europe’s position as world leader in fusion and would be of great advantage to European industry and laboratories.”

To date, the United States has been the least forthcoming. Reductions in the fusion budget have already forced the United States to trim its annual contribution to the ITER design phase from the promised $85 million to $55 million. Officials in the U.S. Department of Energy (DOE) have discussed informally with their foreign counterparts the possibility of participating in ITER construction with a $55 million annual contribution, which would cover about 5 percent of the total construction cost. It is unlikely that ITER construction can go forward with such a minimal U.S. contribution.
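The “about 5 percent” figure is consistent with simple arithmetic if the construction phase lasts roughly ten years, an assumption inferred from the 1998 construction start and 2008 initial operation mentioned earlier rather than a number given in this paragraph. A quick check in Python:

    # Rough check of the "about 5 percent" share mentioned above.
    # The ten-year construction period is assumed (1998 start, 2008 initial
    # operation, as discussed earlier); the $10 billion total comes from the
    # construction-cost estimate quoted previously (1995 dollars).
    annual_contribution = 55e6        # dollars per year
    construction_years = 10           # assumed
    total_construction_cost = 10e9    # dollars

    us_share = annual_contribution * construction_years / total_construction_cost
    print(f"Implied U.S. share of construction: {us_share:.1%}")  # about 5.5%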

In fact, this U.S. position has been a major factor in the inability of the sponsoring governments to move toward an agreement to ensure that construction can begin as planned in July 1998, when the present ITER design agreement ends. As a result, informal discussions about a three-year transition period between the end of the design phase and the formal initiation of construction have recently intensified. Such a transition phase, if adequately funded, could cover many of the tasks that would normally be carried out in the first years of construction, but it would inevitably affect the momentum, schedule, and cost of the project.

Part of the decision about building ITER is selecting a site. To date, Japan, Canada, France, Sweden, and Italy have informally indicated an interest in playing host to ITER, whereas the United States has indicated that it will not offer a site.

The United States should reconsider. The Savannah River Site (SRS) in South Carolina satisfies all ITER site requirements with no need for design modifications, and other government nuclear laboratories such as Oak Ridge in Tennessee also meet the site requirements. SRS’s extensive facilities, which include sea access for shipping large components, would yield a site credit (the portion of new-site costs made unnecessary by existing facilities) of about $1 billion. SRS’s existing expertise and infrastructure are directly relevant to ITER needs and would complement the fusion expertise of the ITER project team. This expertise and infrastructure must be maintained in any case for national security; ITER could use them at little additional real cost.

There are at least two tangible advantages to hosting ITER. First, fusion engineering expertise and infrastructure will be established at the host site. The host country subsequently will be able to use this residual expertise and infrastructure for building and operating one or more fusion neutron source facilities and/or to construct a DEMO. Second, estimates suggest that ITER will contribute $6 billion to the local economy over a period of 30 years.

A bonanza of benefits

Scientific and technical. The primary objective of the U.S. fusion program is to study the science that underlies fusion energy development. ITER offers the United States a way of participating in the investigation of the leading plasma science issues of the next two decades for a fraction of the cost of doing it alone. ITER will provide the first opportunity to study the plasma regime found in a commercial fusion energy reactor, the last major frontier of fusion plasma science. Under the present budget, ITER would seem to be the only way for the United States to maintain a significant role in the worldwide Tokamak experimental program, which is far advanced by comparison with other confinement configurations. In short, participation in ITER is the only opportunity for the United States to remain at the forefront of experimental fusion plasma science over the next few decades.

Fusion energy also requires the development of plasma technology and fusion nuclear science and technology. ITER will demonstrate plasma and nuclear technologies that are essential to a fusion reactor in an integrated system, and it will provide a facility for fusion nuclear and materials science investigations. Participation in ITER not only allows the United States to share the costs of these activities but is the only opportunity for the United States to be involved in essential fusion energy technology development. These ITER studies of the physics of burning plasmas and nuclear and materials science, plus the technology demonstrations, are relevant not just to the Tokamak but also to alternate concepts of magnetic confinement.

Industrial. Many of the ITER components will require advances in the state of the art of design and fabrication. Involvement in this process would enhance the international competitiveness of several U.S. high-tech industries and would surely result in a number of spinoff applications, as well as positioning U.S. firms to manufacture such components for future fusion devices. The participation of U.S. firms in ITER device fabrication would be proportional to the U.S. contribution to ITER construction (excluding site costs) and would be independent of site location.

Political. Two decades ago, the countries of the European Community joined forces to build and operate the successful Joint European Torus fusion experiment, which served the larger political purpose of providing a major collaborative project at the time of the formation of the European Community. ITER could provide a similar prototype for collaboration on scientific megaprojects among the countries of the world, leading to enormous savings in the future. ITER is perhaps unique among possible large collaborative science projects in that it has been international from the outset. ITER characteristics and objectives were defined internationally, its design has been carried forward by an international team, and the supporting R&D has been performed internationally. ITER represents an unprecedented international consensus.

The U.S. ITER Steering Committee recently completed a study of possible options for continued U.S. participation in ITER. It concluded: “The U.S. fusion program will benefit from a significant participation in ITER construction, operation and testing, and particularly from a level of participation that would enable the U.S. to influence project decisions and have an active project role.” An important finding of this study is that, by concentrating contributions in its areas of strength, the U.S. could play so vital a role in the project that it might be able to obtain essentially full benefits while contributing as little as $120 million annually (one-sixth of the ITER construction cost, exclusive of site costs), provided that the other three parties would agree to such a distribution of effort.

Feasibility

The ITER design is based on extrapolation of a large body of experimental physics and engineering data and on detailed engineering and plasma physics analyses. The overall design concept and the designs for the various systems have evolved over 25 years. The physics extrapolation from the present generation of large Tokamaks to ITER is no larger than extrapolations from the previous to present generations of Tokamaks: a factor of four in plasma current and a factor of three in the relevant size parameter.

A large fraction of the world’s fusion scientists and engineers, in addition to the roughly 250 who are full-time members of the ITER joint central team and the home teams of the four partners, have helped develop the technological basis for the ITER design and have reviewed that design and the supporting R&D. Perhaps a thousand fusion scientists and engineers in the national fusion programs of the ITER participants are involved part-time. Aspects of the design have been presented and discussed in hundreds of papers at technical meetings. International expert groups make recommendations to the ITER project in several areas of plasma physics. An ITER technical advisory committee of 16 prominent senior fusion scientists and engineers who are not otherwise involved in the project meets two to four times per year to review the design and supporting R&D. In addition, each of the four ITER parties has one or more ITER technical advisory committees of its own.

The ITER conceptual design (1990) was reviewed by about 50 U.S. fusion scientists and engineers independent of ITER; similar reviews were held by the other three ITER parties. The ITER interim design (1995) was reviewed by the ITER technical advisory committee, by formal review committees within the other three parties, and by various groups within the United States.

The ITER detailed design (1996) was recently subjected to a four-month in-depth review by a panel of the U.S. Fusion Energy Science Advisory Committee (FESAC). The report of the FESAC panel, which was made up of about 80 scientists and engineers, most of whom were not involved in ITER, concluded: “Our overall assessment is that the ITER engineering design is a sound basis for the project and for the DOE to enter negotiations with the Parties regarding construction. There is high confidence that ITER will be able to study long pulse burning plasma physics under reduced conditions as well as provide fundamental new knowledge on plasma confinement at near-fusion-reactor plasma conditions. The panel would like to reaffirm the importance of the key elements of ITER’s mission-burning plasma physics, steady-state operation, and technology testing. The panel has great confidence that ITER will be able to make crucial contributions in each of these areas.”

An independent review of the detailed design by a large committee in the European Union concluded that “The ITER parameters are commensurate with the stated objectives, and the design provides the requisite flexibility to deal with the remaining uncertainties by allowing for a range of operating conditions and scenarios for the optimization of plasma performance.” A similar Russian review noted that “the chosen ITER physics parameters, operation regimes and operational limits seem to be optimal and sufficiently substantiated.” Japan is also carrying out a similar technical review.

We must keep in mind, however, that ITER is an experiment. Its very purpose is to enter unexplored areas of plasma operation and to use technologies not yet proven in an integrated fusion reactor system; by definition, there are some unresolved issues. Moreover, a major project such as ITER, which would dominate the world’s fusion R&D budgets for three decades, is a natural lightning rod for criticism by scientists with a variety of concerns and motivations. Some scientists have raised questions about ITER. Their concerns generally fall into one of two categories: They suggest either that the plasma performance in ITER may not be as good as has been projected or that ITER is too ambitious in trying to advance the state of the art in plasma physics and technology simultaneously. These concerns are being addressed in the ITER design and review process. In sum, the preponderance of informed opinion to date is that the ITER design would meet its objectives.

The right choice

Alternatives to ITER have been discussed. For example, the idea of addressing the various plasma, nuclear, and technology issues separately in a set of smaller, less costly experiments is appealing and has been suggested many times. This idea was undoubtedly in the mind of the President’s Committee of Advisors on Science and Technology (PCAST) when it recently suggested, in the face of the then-impending cut in the fusion budget, that the United States propose to its ITER partners that they collaborate on a less ambitious and less costly fusion plasma science experiment. A subsequent study by a U.S. technical team estimated that PCAST’s proposed copper magnet experiment would cost half as much as ITER but would accumulate plasma physics data very slowly and would not address many of the plasma technology issues or any of the nuclear science and technology issues. The PCAST suggestion was informally rejected by the other ITER partners. Other versions of a copper magnet experiment with purely fusion plasma science objectives have been rejected in the past, formally and informally, as a basis for a major international collaboration.

The ITER technical advisory committee and the four ITER partners, acting through their representatives to the ITER Council, recently once again endorsed the objectives that have determined the ITER design: “The Council reaffirmed that a next step such as ITER is a necessary step in the progress toward fusion energy, that its objectives and design are valid, that the cooperation among four equal Parties has been shown to be an efficient framework to achieve the ITER objectives and that the right time for such a step is now.” The report of the independent 1996 European Union Fusion Evaluation Board states: “Fusion R&D has now reached a stage where it is scientifically and technically possible to proceed with the construction of the first experimental reactor, and this is the only realistic way forward.” In sum, there is broad international agreement that ITER is the right next step.

What is to be done

Given the present federal budget climate, considerable leadership will be needed to realign the evolving government position on ITER and the national fusion program with what would seem to be the long-term national interest. I suggest the following actions:

  • The U.S. government should commit to participation in ITER construction and operations as a full partner and should announce in 1997 its willingness to enter formal ITER negotiations. The U.S. contribution to ITER should be increased from the present $55 million annually for the design phase to $100 to $150 million annually by the start of the construction phase. ITER construction funding should be budgeted as a line item separate from the budget of the U.S. national fusion program in order to ensure the continued strength of the latter.
  • The U.S. government should support the initiation of the ITER construction phase immediately after the end of the Engineering Design Activities agreement in July 1998. If a transition phase proves at this point to be a political necessity, the U.S. government should work to ensure that the transition phase activities are adequately supported to minimize delay in the project schedule.
  • At least $300 to $350 million annually is necessary to allow the United States to benefit from the opportunity provided by ITER for plasma physics and fusion nuclear and materials science experimentation and for fusion technology development, as well as to carry out a strong national program of fusion science investigations. This would make total annual U.S. fusion spending $400 to $500 million (in 1995 dollars) during the ITER construction period.
  • The U.S. government should prepare a statement of intent offering to host ITER and transmit it to its partners by the due date of February 1998. One of the government nuclear laboratory sites should be identified for this purpose. The site costs for ITER are estimated at $1.3 billion, and site credit for existing infrastructure and facilities at a site such as SRS could be near $1 billion. Because it has suitable existing facilities, the United States could host ITER as a significant part of its contribution to the project without a major up-front cash outlay.