Scientific Truth-Telling

The Ascent of Science is a magisterial, witty, and certainly perceptive guide to how contemporary science came to be. It is an idiosyncratic tour, a reflection of the author’s tastes, enthusiasms, and dislikes: Good theory matters more than experiment because “the Oscar winners in the history of science have almost always been creators of theory.” The great ideas within that enormous enterprise we label “physics” are the most interesting to Brian Silver; and although the obligation to discuss biology (reductionist and organismic) is met, the social, behavioral, and economic sciences are ignored.

The Ascent of Science does not directly address contemporary science policy concerns except for a brief and largely ineffective coda toward the end of the book, in which Silver (a professor of physical chemistry at the Technion-Israel Institute of Technology, who died shortly after he completed the book) sallies, with a style more limpid than most excursions into this territory, into how science can be better understood by the public. The book finishes with an effort to say something about the future, which, although elegantly put, says little, because little of any substance can be said. The publisher probably insisted on it. These niggles aside, the bulk of the book is dazzling. It limns with verve and deep understanding the rise of the great ideas of contemporary science: the emergence of the fundamental structure of the atom, the stunning originality of general relativity, the weirdness and importance of the quantum, the bizarre duality of matter as both wave and particle, the splendid history behind the contemporary ideas about evolution, and more.

In all, it is a dazzling offering, in content, style, humor (how many writers about science quote Madonna: “I’m a material girl in a material world”?), and humaneness. By “humaneness” I mean that Silver addresses his book to what he calls HMS, the “homme moyen sensuel”: the average man (or woman) with feelings. “HMS remembers little or nothing that he learned in school, he is suspicious of jargon, he is more streetwise than the average scientist, he is worried about the future of this planet, he may like a glass of single-malt whiskey to finish off the day.” He is a bit like Steven Weinberg’s smart old lawyer, who serves as the model reader for his The First Three Minutes. HMS, I would venture, is not a bad model for the sort of reader Issues is edited for.

And although Silver offers no policy prescriptions, he has much to say that is useful to those who make science policy. He offers, without pretension or false enthusiasm, insights into the development of contemporary science. He emphasizes that at times quite irrational forces drove great science. Thus, part of Newton’s genius was the ease with which he reconciled his efforts to predict the day the world would end with his laws of planetary movement. “He didn’t believe,” writes Silver, in the style that typifies the book, “that the almighty spent his days and nights chaperoning the universe, a permanent back seat driver for the material world. Once the heavenly bodies had been set in motion, that motion was controlled by laws. And the laws were universal.” And he is contemptuous of scientific “insights” that offer the verisimilitude of science but are not supported by experiment, mathematics, or testable theories. Thus, “to seek inspiration from [Goethe’s] mystical pseudoscience is like taking spiritual sustenance from Shakespeare’s laundry list.” He is angered by ill-based criticisms of science, labeling Jeremy Rifkin’s invention of “material entropy” as “meaningless.”

Criticism of scientists, too

But Silver is a relentlessly honest man, and so he heaps equal dollops of scorn on his colleagues. He is especially and repetitively savage about what he sees as the extravagant claims made for particle physics, arguing that once the proton, neutron, and electron were found and their properties experimentally confirmed, the very expensive searches for ever more exotic particles, such as the Higgs boson, became increasingly hard to justify other than by their importance to particle physicists. “If we had never discovered the nuclear physicists’ exotic particles, life as we know it on this planet would be essentially the same as today. Most of the particles resemble ecstatic happiness: They are very short-lived and have nothing to do with everyday life.” His assault, repeated several times in the book, turns to sarcasm: “Finding the Higgs boson will be a magnificent technical and theoretical triumph. Like a great Bobby Fischer game.” Or “if the Higgs boson represents the mind of God, then I suggest we are in for a pretty dull party.” Of course, this is a tad unfair, even if some of the claims of its practitioners invite such assaults on their field. Although some particle physicists are contemptuous of questions about why taxpayers should support their costly science, there are others who provide thoughtful analyses of how their research benefits the work of scientists in other fields and contributes to national goals.

Silver has other targets in science. He scorns what he calls the “strange articles” on the composition of interstellar dust clouds published about 20 years ago by Fred Hoyle and Chandra Wickramasinghe as resting on “shameless pseudoscience.” He attacks what he believes are exaggerated interpretations of the 1953 experiments by Stanley Miller seeking to validate some ideas about how life on Earth may have originated. Using a broader lens, he is more understanding but still impatient with the conservatism of science, its tendency to refuse credence to new ideas until reason overwhelms. He cites the 19th-century resistance to the notion of the equivalence of heat and work and the conservation of energy, even when giants of the day, such as Hermann von Helmholtz, wrote in support of these ideas. Helmholtz’s paper was, in Silver’s words, “You guessed it: rejected,” and he published it at his own expense. What sweetens this saltiness is how firmly and fairly he can appraise even those who were wrong but gave it an honest try. He is a great and fervent admirer of Lucretius, even though “almost everything he wrote was wrong.”

Silver is also a refreshing foil to those who would elevate scientists to a priesthood of truth. He mocks the mirage of science as “an activity carried out by completely unprejudiced searchers-after-knowledge, floating free of established dogma. That is the Saturday Morning Post, white-toothed, smiling face of science.” Indeed, his impatience with science rooted in philosophical systems is palpable. He tells the story of J. J. Thomson in Britain and Walter Kaufmann in Germany, both of whom at about the same time found evidence suggesting the existence of the electron (in fact, Kaufmann’s data were better). But Thomson went on to speculate on the electron’s existence and won a Nobel Prize for it. Kaufmann, a devotee of Ernst Mach’s logical positivist beliefs that only what could be directly verified existed, didn’t speculate. He is a historical footnote. Silver mischievously adds, “If you want to annoy a logical positivist, ask him if the verifiability principle stands up to its own criteria for verifiability.”

Good science policy is critically dependent on the sort of hard-edged and knowing judgments made by Silver. It depends on carving through the competing claims for this or that discipline, this or that discovery, this or that glowing promise. It is enabled by people such as Silver who lived the life, who know how hard good science is, who understand that new work is hardly ever formed out of whole cloth but does in fact rest on “the shoulders of giants,” and who are willing to be brutally honest even if that makes for uncomfortable moments within the community. This book is a reminder of how the policies that will continue the remarkable ascent of science depend on truth-tellers who understand science as well as the need for adamantine adherence to the motto of the Royal Society: “Nullius in Verba,” which Silver translates as “don’t trust anything anyone says.” But enough of messages; The Ascent of Science is a splendid read.

Research Support for the Power Industry

A revolution is sweeping the electric power industry. Vertically integrated monopoly suppliers and tight regulation are being replaced with a diversified industry structure and competition in the generation and supply of electricity. Although these changes are often termed “deregulation,” what is actually occurring is not so much a removal of regulation as a substitution of regulated competitive markets for regulated monopolies.

Why is this change occurring? Cheap, plentiful gas and new technology, particularly low-cost, highly efficient gas turbines and advanced computers that can track and manage thousands of transactions in real time, have clearly contributed. However, as with the earlier deregulation of the natural gas industry, a more important contributor is a fundamental change in regulatory philosophy, based on a growing belief in the benefits of privatization and a reliance on market forces. In the United States, this change has been accelerated by pressure from large electricity consumers in regions of the country where electricity prices are much higher than the cost of power generated with new gas turbines.

Although the role of technology has thus far been modest, new technologies on the horizon are likely to have much more profound effects on the future structure and operation of the industry. How these technologies will evolve is unclear. Some could push the system toward greater centralization, some could lead to dramatic decentralization, and some could result in much greater coupling between the gas and electric networks. The evolution of the networked energy system is likely to be highly path-dependent. That is, system choices we have already made and will make over the next several decades will significantly influence the range of feasible future options. Some of the constituent technologies will be adequately supported by market-driven investments, but many, including some that hold great promise for social and environmental benefits, will not come about unless new ways can be found to expand investment in basic technology research.

New technologies in the wings

Several broad classes of technology hold the potential to dramatically reshape the future of the power system: 1) solid-state power electronics that make it possible to isolate and control the flow of power on individual lines, in subsystems, within the power transmission system, and in end-use devices; 2) advanced sensor, communication, and computation technologies, which in combination can allow much greater flexibility, control, metering, and use efficiency in individual loads and in the system; 3) superconducting technology, which could make possible very-high-capacity underground power transmission (essentially electric power pipelines), large higher-efficiency generators and motors, and very short-term energy storage (to smooth out brief power surges); 4) fuel cell technology for converting natural gas or hydrogen into electricity; 5) efficient, high-capacity, long-term storage technologies (including both mechanical and electrochemical systems such as fuel cells that can be run backward to convert electricity into easily storable gas), which allow the system to hold energy for periods of many hours; 6) low-cost photovoltaic and other renewable energy technology; and 7) advanced environmental technologies such as low-cost pre- and postcombustion carbon removal for fossil fuels, improved control of other combustion byproducts, and improved methods for life-cycle design and material reuse.

Two of these technologies require brief elaboration. The flow of power through an alternating current (AC) system is determined by the electrical properties of the transmission grid. A power marketer may want to send power from a generator it owns to a distant customer over a directly connected line. However, if that line is interconnected with others, much of the power may flow over alternative routes and get in the way of other transactions, and vice versa. Flexible AC transmission system (FACTS) technology employs solid-state devices that can allow system operators to effectively “dial in” the electrical properties of each line, thus directing power where economics dictates. In addition, existing lines can be operated without the large reserve capacity necessary in conventional systems, which can make it possible to double transmission capacity without building new lines.
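To make the flow-control point concrete, here is a minimal sketch (with invented numbers; nothing in it comes from the article) of the standard lossless power-flow rule of thumb: two parallel lines between the same pair of buses share a transfer in inverse proportion to their series reactances, so a FACTS device that changes a line's effective reactance redirects power.

```python
# Minimal sketch of the DC (lossless) power-flow rule of thumb.
# Two parallel lines between the same buses split a transfer in inverse
# proportion to their series reactances, so changing a line's effective
# reactance (what a FACTS device does) redirects flow. Numbers are assumed.

def split_flow(total_mw, x1, x2):
    """Return (flow on line 1, flow on line 2) for per-unit reactances x1, x2."""
    p1 = total_mw * x2 / (x1 + x2)
    p2 = total_mw * x1 / (x1 + x2)
    return p1, p2

transfer = 1000.0  # MW a marketer wants to move between two buses (assumed)
print(split_flow(transfer, 0.10, 0.10))  # identical lines: 500 MW on each
print(split_flow(transfer, 0.05, 0.10))  # line 1 compensated: roughly 667 MW on line 1
```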

Distributed generation, such as through small combustion turbines, fuel cells, and photovoltaics, with capacities of less than a kilowatt to a few tens of megawatts, also holds the potential for revolutionary change. Small gas turbines, similar to the auxiliary power units in the tails of commercial airplanes, are becoming cheap enough to supply electricity and heat in larger apartment and office buildings. As on aircraft, when mechanical problems develop, a supplier can simply swap the unit out for a new one and take the troublesome one back to a central shop. Fuel cells are becoming increasingly attractive for stationary applications such as buildings and transportation applications such as low-pollution vehicles. The power plant for an automobile is larger than the electrical load of most homes. Thus, if fuel cell automobiles become common and the operating life of their cells is long, when the car is at home it could be plugged into a gas supply and used to provide power to the home and surplus power to the grid, effectively turning the electric power distribution system inside out. Finally, the cost of small solar installations continues to fall. The technology is already competitive in niche markets, and if climate change or electric restructuring policies make the use of coal and oil more expensive or restrict it to a percentage of total electric generation, then in a few decades much larger amounts of distributed solar power might become competitive, particularly if it is integrated into building materials.
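The claim that a car's power plant dwarfs a household load is easy to check with round numbers; the figures below are order-of-magnitude assumptions of ours, not data from the article.

```python
# Back-of-the-envelope comparison; both figures are assumptions.
car_fuel_cell_kw = 50.0            # rough rating of a fuel cell vehicle drivetrain
home_average_kw = 10_000 / 8760    # ~10,000 kwh/year for a typical home, about 1.1 kW
print(f"A parked fuel cell car could cover roughly "
      f"{car_fuel_cell_kw / home_average_kw:.0f} homes' average load.")
```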

We have grown accustomed to thinking about electricity and gas as two separate systems. In the future, they may become two coupled elements of a single system. There is already stiff competition between electricity and gas in consumer applications such as space heating and cooling. Gas is also the fuel of choice for much new electric generation. Owners of gas-fired power plants are beginning to make real-time decisions about whether to produce and sell power or sell their gas directly. Such convergence is likely to increase. Unlike electricity, gas can be easily stored. To date, most interest in fuel cells has been in going from gas to electricity. But, especially in association with solar or wind energy, it can be attractive to consider running a fuel cell “backward” so as to make storable hydrogen gas.

These are only a few of the possibilities that new technology may hold for the future of the power industry. Whether that future will see more or less decentralization and whether it will see closer integration of the gas and electricity systems depends critically on policy choices made today, the rate at which different technologies emerge, the relative prices of different fuels, and the nature of the broader institutional and market environment. What does seem clear is that big changes are possible. With them may come further dramatic changes in the structure of the industry and in the control strategies and institutions that would be best for operating the system.

Electric power is not telecommunications

It is tempting to conclude that the changes sweeping electric power are simply the power-sector equivalent of the changes we have been witnessing in telecommunications for more than a decade. But although a change in regulatory and economic philosophy has played an important part in initiating both, the role played by technology and by organizations that perform basic technology research has been and will likely continue to be very different in the two sectors.

New technology played a greater role in driving the early stages of the revolution in the telecommunications industry. Much of the basic technology research that provided the intellectual building blocks for that industry was done through organizations that have no equivalent in the power sector. An obvious example is Bell Telephone Laboratories. For historical and structural reasons, the power industry never developed an analogous institution and for many years invested a dismayingly small percentage of its revenues in research of any kind. Even in recent years, firms in the electric industry have spent as little as 0.2 percent of their net sales on R&D, whereas the pharmaceutical, telecommunications, and computer industries spend between 8 and 10 percent.

The aftermath of the 1965 blackout in the Northeast, which brought the threat of congressionally mandated research, finally induced the industry to create the Electric Power Research Institute (EPRI). Today EPRI stands as one of the most successful examples of a collaborative industry research institution. But for a number of reasons, including the historically more limited research tradition of the power industry, pressures from a number of quarters for rapid results, and the dominant role of practically oriented engineers, it has always favored applied research. Nothing like the transistor, radio astronomy, and the stream of other contributions to basic science and technology that flowed from the work of the old Bell Labs has emerged from EPRI. Of course, with the introduction of competition to the telecommunications industry, Bell Labs has been restructured and no longer operates as it once did. But in those years when research could be quietly buried in the rates paid by U.S. telephone customers, Bell Labs laid a technological foundation that played an important role in ultimately undermining monopoly telephone service and fueling the current telecommunications revolution.

Bell Labs was not the only source of important basic technology research related to information technology. Major firms fueled some of the digital revolution through organizations like IBM’s Thomas J. Watson Research Center, but government R&D, much of it supported by the military, was even more important in laying the initial foundations. For example, academic computer science as we know it today was basically created by the Defense Advanced Research Projects Agency (DARPA) through large sustained investments at MIT, Stanford, Carnegie Mellon, and a few other institutions.

Some analogous federal research has benefited the electric power industry. Civilian nuclear power would never have happened without defense-motivated investments in nuclear weapons and ship propulsion as well as investments in civilian nuclear power by the Atomic Energy Commission and the Department of Energy (DOE). Similarly, the combustion turbines that are the technology of choice for much new power generation today are derived from aircraft engines. Although the civilian aircraft industry has played a key role in recent engine developments, here again, government investments in basic technology research produced many of the most important intellectual building blocks. The basic technology underpinnings for FACTS technology, fuel cells, and photovoltaics also did not come from research supported by the power industry. These technologies are the outgrowth of developments in sectors such as the civilian space program, intelligence, and defense.

Although one can point to external contributions of basic technology knowledge that have benefited the electric power sector, their overall impact has been, and is likely to continue to be, more modest than the analogous developments in telecommunications. Nor are external forces driving investments in basic power technology research to the same degree as in telecommunications. The communications industry can count on a continuing flood of new, better, and cheaper technologies flowing to its design engineers as the result of research activities in other industrial sectors and government R&D programs. At the moment, despite a few hopeful signs such as recent DARPA interest in power electronics, the electric power industry does not enjoy the same situation.

Within the power industry, neither the electric equipment suppliers nor traditional power companies can be expected to support significant investments in basic technology research in the next few years. From 1995 to 1996, the electric and gas industry reduced private R&D funding in absolute terms and cut basic research by two-thirds. Of course, many of these companies may increase their investments in short-term applied research to gain commercial advantages in emerging energy markets. Indeed, from 1995 to 1996, dollars spent by private gas and electric firms on development projects increased in absolute terms. In the face of competitive threats from new power producers, traditional power companies understandably have shortened their time horizons and increased their focus on short-term issues of efficiency and cost control. Similarly, most equipment manufacturers are concerned principally with the enormous current demand to build traditional power systems all over the industrializing world. Future markets offered by changes occurring in developed-world power systems lie too far in the future to command much attention.

Putting all these pieces together, the result is that current investments in basic technology research related to electric power and more generally to networked energy systems are modest. Without policy intervention, they are likely to stay that way.

Need for research

What difference does it make if a future technological revolution in electric power gets postponed for a few decades because we are not making sufficient investments in basic technology research today to fuel such a revolution? We think it matters for at least three reasons.

First, there is opportunity cost. The world is becoming more electrified. Once energy has been converted to electricity, it is clean, easier to control, easier to use efficiently, and in most cases, safer. An important contributor to this process is the growing number of products and systems controlled by computers, which require reliable, high-quality electricity. A delay in the introduction of technologies that can make the production of electricity cheaper, cleaner, more efficient, and more reliable as well as make its control much easier will cost the United States and the world billions of dollars that might otherwise be invested in other productive activities.

Second, there are environmental externalities. Thanks to traditional environmental regulation, the developed world now produces electricity with far lower levels of sulfur and nitrogen emissions, fewer particulates, and lower levels of other externalities than in the past. But the production of electric power still imposes large environmental burdens, especially in developing countries. The threat of climate change may soon add a need to control emissions of carbon dioxide and other greenhouse gases. Eventually we may have to dramatically reduce the combustion of fossil fuels and make a transition to a sustainable energy system that produces energy with far fewer environmental externalities and uses that energy far more efficiently. This will not happen at reasonable prices and without massive economic dislocations unless we dramatically increase the level of investment in energy-related basic technology research, so that when the time comes to make the change, the market will have the intellectual building blocks needed to do it easily and at a modest cost.

Third, there can be costs from suboptimal path dependencies. Will current and planned capital, institutional, and regulatory structures facilitate or impede the introduction of new technologies? System and policy studies of these questions are not likely to be very expensive. But because there may be strong path-dependent features to the evolution of the networked energy system, without careful long-term assessment and informed public policy, the United States could easily find itself frozen into suboptimal technological and organizational arrangements. This, in turn, could significantly constrain technological options in other electricity-using industries.

Mechanisms for research

The most common traditional policy tool for supporting a public good such as energy-related basic technology and environmental research has been direct government expenditure. But in the case of energy, the system has serious structural problems that are not easily rectified. DOE is the largest government funder of energy research. However, most of DOE’s energy budget is more applied in its orientation than the program we are proposing. The DOE basic research program is modest in scale, and for historical reasons much of it does not address topics that are likely to be on the critical path for the future revolution in energy technology. The National Science Foundation (NSF) supports only a few million dollars per year of basic technology research that is directly relevant to power systems.

DOE’s budget is subject to the usual vagaries of interest group politics, which makes it difficult to provide sustained support for basic technology research programs. Support for research in areas with a long-term focus and a broad distribution of benefits is particularly at risk. Although DOE has emphasized the important and unique role it plays in funding such research, and in some instances has a track record of protecting such work, it must carefully pick and choose what to support among competing areas of basic work. Recent pressures on the discretionary budget have further reduced the agency’s ability to sustain a substantive portfolio of basic research, because such programs compete under a single funding cap with stewardship of the nation’s atomic warheads, cleanup of lands contaminated by the weapons program, and programs in applied energy research and demonstration.

The President’s Committee of Advisors on Science and Technology concluded in its 1997 report that the United States substantially underinvests in R&D, observing: “Scientific and technological progress, achieved through R&D, is crucial to minimizing current and future difficulties associated with . . . interactions between energy and well-being. . . . If the pace of such progress is not sufficient, the future will be less prosperous economically, more afflicted environmentally, and more burdened with conflict than most people expect. And if the pace of progress is sufficient elsewhere but not in the United States, this country’s position of scientific and technological leadership, and with it much of the basis of our economic competitiveness, our military security, and our leadership in world affairs, will be compromised.”

President Clinton’s FY99 request for energy R&D was approximately 25 percent above the funding levels for FY97 and FY98. However, much of the focus continued to be on applied technology development and demonstration projects incorporating current technological capabilities, with relatively modest investments planned in energy-related basic technology research. Congressional reaction has not been favorable.

Given the difficulty that the United States has had in carrying out a significant investment in basic energy-related and environmental technology research as part of the general federal discretionary budget and the obstacles to realigning agency agendas, we believe that strategies that facilitate collaborative nongovernmental approaches hold greater promise. Properly designed, they may also be able to shape and multiply federally supported R&D.

Several mutually compatible strategies hold promise. The first is a tax credit for basic energy technology and related environmental research. Proposals now being discussed in Congress would modify the tax code to establish a tax credit of at least 20 percent for corporate support of R&D at qualified research consortia such as EPRI and the Gas Research Institute (GRI). These proposals are designed to create an incentive for private firms to voluntarily support collaborative research with broad public benefits, where the benefits and costs are shared equitably by members of the nonprofit research consortium, where there is no private capture of these benefits, and where the results of the research are made public.

Although such a change in the tax code will help, it is unlikely to be sufficient to secure the needed research investment. For this reason, we believe that new legal arrangements should be developed that require all players in the networked energy industry to make investments in basic technology research as a standard cost of doing business. Why single out the energy industry? Because, as we argued above, it is critical to the nation’s future well-being and, in contrast with other key sectors, enjoys fewer spillovers from other programs of basic technology research.

A new mandate for investment in basic technology research could be imposed legislatively on all market participants in networked energy industries, including electricity and gas. It should be designed to allocate most of the money through nongovernmental organizations without ever passing through the U.S. Treasury. For example, market participants could satisfy the mandate through participation in nonprofit collaborative research organizations such as EPRI and GRI. Other collaborative research organizations, similar to some of those that have been created by the electronics, computer, and communications industries, might be established for other market participants to fund research at universities and nonprofit laboratories.

The long-term public interest focus of such research would be ensured by requiring programs to meet some very simple criteria for eligibility, set forth in statutes. Industry participants should be given considerable discretion as to where they make their research investments. In most cases, they would probably choose to invest in organizations that already are part of the existing R&D infrastructure. Firms that did not want to be bothered with selecting a research investment portfolio could make their investment through a fund to be allocated to basic technology and environmental research programs at DOE, NSF, and the Environmental Protection Agency (EPA). Because of the long-term precompetitive nature of the mandated research investment, it is unlikely to supplant much if any of firms’ existing research.

To the extent possible, the mandated research investment should be designed to be competitively neutral. The requirement to make such investments should be assigned to suppliers of the commodity product (such as electricity or natural gas) and to providers of delivery services (such as transmission companies and gas transportation companies), so that both sets of players (and through them, the consumers of their products) are involved in funding the national technology research enterprise. Because the required minimum level of investment would be very small relative to the delivered product price [a charge of 0.033 cents per kilowatt hour (kwh), less than 0.5 percent of the average delivered price of electricity, would generate about a billion dollars per year], it is not likely to lead to distortions among networked and non-networked energy prices.
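The revenue arithmetic behind that estimate is easy to reproduce; the sketch below assumes roughly 3 trillion kwh of annual U.S. retail electricity sales, a late-1990s order of magnitude that the article does not itself state.

```python
# Rough check of the funding estimate; the sales figure is an assumption.
charge_dollars_per_kwh = 0.033 / 100   # 0.033 cents per kwh
annual_sales_kwh = 3.0e12              # ~3 trillion kwh of U.S. retail sales (assumed)
revenue = charge_dollars_per_kwh * annual_sales_kwh
print(f"${revenue / 1e9:.1f} billion per year")  # about $1 billion
```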

A presidentially appointed board of technical experts drawn from a wide cross-section of fields, not just the energy sector, should oversee the program’s implementation, establish criteria for eligibility, and monitor its operation. Strategies will have to be developed for modest auditing and other oversight. Some lessons may be drawn from past Internal Revenue Service audit experience, but some new procedures will probably also be required. Membership in the board could be based on recommendations from the secretary of energy, the EPA administrator, the president’s science advisor, the NSF director, and the National Association of Regulatory Utility Commissioners. To avoid the creation of a new federal agency, the board should receive administrative and other staff support from an existing federal R&D agency such as NSF.

Our proposal extends, and we believe improves on, the public interest research part of “wires charge” proposals that are now being actively discussed among players in the public debate about electric industry restructuring. Such a non-bypassable charge, paid by parties who transport electricity over the grid, is typically discussed as a source of support for a variety of public benefit programs, including subsidies for low-income customers, energy efficiency programs, environmental projects, and sometimes also research. A number of states are already implementing such charges or are contemplating implementation.

For example, California’s new electric industry restructuring law has provided for about $62 million to be collected per year for four years through a charge assessed on customers’ electricity consumption. The purpose of this charge is to support public interest R&D. Funds are being spent primarily on R&D projects designed to show short-term results, in part to provide data by the time that the four-year program is reviewed for possible extension. New York is considering a charge to collect $11 million over the next three years to fund renewable R&D. Massachusetts has adopted a mechanism to fund renewable energy development, with a charge based on consumption that will begin at 0.075 cents/kwh in 1998 and grow to 0.125 cents/kwh in 2002. This charge is expected to generate between $26 million and $53 million per year over time for activities to promote renewable energy in the state, including some R&D, as well as to support the commercialization and financing of specific power projects.

In 1997, state regulators passed resolutions urging Congress to consider, and EPRI, GRI, and their constituents to develop, a variety of new mechanisms, including taxes, tax credits, and a broad-based, competitively neutral funding mechanism, to support state and utility public benefits programs in R&D, in addition to energy efficiency, renewable energy technologies, and low-income assistance. Several restructuring proposals in Congress, including the president’s proposed comprehensive electricity competition plan, include a wires charge. The president’s program would create a $3-billion-per-year public benefit program for low-income assistance, energy efficiency programs, consumer education, and development and demonstration of emerging technologies, especially renewable resources. Basic technology research is not mentioned. The president’s plan, which would cap wires charges at one-tenth of a cent per kwh on all electricity transmitted over the grid, would be a matching program for states that also establish a wires charge for public benefit programs.

There are two serious problems with state-level research programs based on wires charges. For political reasons, their focus is likely to be short-term and applied, and they are likely to result in serious balkanization of the research effort. Balkanization will result because most state entities will find themselves under political pressure to invest in programs within the state. This will make it difficult or impossible to support concentrated efforts at a few top-flight organizations. Many of the issues that need to be addressed simply cannot be studied with a large number of small distributed efforts.

New carbon dioxide control instruments, now being considered as a result of growing concerns about possible climate change, offer another opportunity to produce resources for investment in a mandated program of basic energy technology research. Carbon emission taxes or a system of caps and tradable emission permits are the two policy tools most frequently proposed for achieving serious reductions in future carbon dioxide emissions. Over time, both are likely to involve large sums of money. Following the model outlined above, a mandate could require that a small portion of that money be invested in basic technology research. For example, in a cap and trade system, permit holders might be required to make small basic technology research investments in lieu of a “lease” fee in order to hold their permit or keep it from shrinking.

Although the mechanisms we have proposed to support basic technology and environmental research are different, they are all intended to be competitively neutral in the marketplace, national in scope, and large enough to fund a portfolio of basic technology research at a level of at least a billion dollars per year to complement and support other more applied research that can be expected to continue as the industry restructures. With the implementation of such a set of programs, the United States would take a big step toward ensuring that we, our children, and their children will be able to enjoy the benefits of clean, abundant, flexible, low-cost energy throughout the coming century.

Collaborative R&D: How Effective Is It?

R&D collaboration is widespread in the U.S. economy of the 1990s. Literally hundreds of agreements now link the R&D efforts of U.S. firms, and other collaborative agreements involve both U.S. and non-U.S. firms. Collaboration between U.S. universities and industry also has grown significantly since the early 1980s: hundreds of industry-university research centers have been established, and industry’s share of U.S. university research funding has doubled during this period, albeit to a relatively modest 7 percent. Collaboration between industrial firms and the U.S. national laboratories has grown as well during this period, with the negotiation of hundreds of agreements for cooperative R&D.

R&D collaboration has been widely touted as a new phenomenon and a potent means to enhance economic returns from public R&D programs and improve U.S. industrial competitiveness. In fact, collaborative R&D projects have a long history in U.S. science and technology policy. Major collaborative initiatives in pharmaceuticals manufacture, petrochemicals, synthetic rubber, and atomic weapons were launched during World War II, and the National Advisory Committee for Aeronautics, founded in 1915 and absorbed into NASA in 1958, made important contributions to commercial and military aircraft design throughout its existence. Similarly, university-industry research collaboration was well established in the U.S. economy of the 1920s and 1930s and contributed to the development of the academic discipline of chemical engineering, transforming the U.S. chemicals industry.

There is no doubt that collaborative R&D has made and will continue to make important contributions to the technological and economic well-being of U.S. citizens. But in considering the roles and contributions of collaboration, we must focus on the objectives of collaborative programs, rather than treating R&D collaboration as a “good thing” in and of itself. Collaborative R&D can yield positive payoffs, but it is not without risks. Moreover, R&D collaboration covers a diverse array of programs, projects, and institutional actors. No single recipe for project design, program policies, or evaluation applies to all of these disparate entities.

In short, R&D collaboration is a means, not an end. Moreover, the dearth of systematic analysis and evaluation of existing federal policies toward collaboration hampers efforts to match the design of collaborative programs to the needs of different firms, industries, or sectors. A review of U.S. experience reveals a number of useful lessons and highlights several areas where more study is needed.

Policy evolution

Since the mid-1970s, federal policy has encouraged collaboration among many different institutional actors in the U.S. R&D system. One of the earliest initiatives in this area was the University-Industry Cooperative Research program of the National Science Foundation (NSF), which began in the 1970s to provide partial funding to university research programs enlisting industrial firms as participants in collaborative research activities. The NSF efforts were expanded during the 1980s to support the creation of Engineering Research Centers, and other NSF programs now encourage financial contributions from industry as a condition for awarding research funds to academic institutions. Moreover, the NSF model has been emulated by other federal agencies in requiring greater cost-sharing from institutional or industry sources in competitive research grant programs. The NSF and other federal initiatives were associated with the establishment of more than 500 university-industry research centers during the 1980s.

R&D collaboration between industrial firms and universities received another impetus from the Bayh-Dole Act, passed in 1980 and amended in 1986, which rationalized and simplified federal policy toward the patenting and licensing by nonprofit institutions of the results of publicly funded research. The Bayh-Dole Act has been credited with significant expansion in the number of universities operating offices to support the patenting, licensing, and transfer to industrial firms of university research results. These offices and the legislation have also provided incentives for industrial firms to form collaborative R&D relationships with universities.

The Bayh-Dole Act, the Stevenson-Wydler Act of 1980, and the Technology Transfer Act of 1986 also created new mechanisms for R&D collaboration between industrial firms and federal laboratories through the mechanism of the Cooperative Research and Development Agreement (CRADA). Under the terms of a CRADA, federal laboratories are empowered to cooperate in R&D with private firms and may assign private firms the rights to any intellectual property resulting from the joint work; the federal government retains a nonexclusive license to the intellectual property. The XXXXXX XXXXXX Act [Which Act?] was amended in 1989 to allow contractor-operated federal laboratories to participate in CRADAs. Federal agencies and research laboratories have signed hundreds of CRADAs since the late 1980s; between 1989 and 1995, the Department of Energy (DOE) alone signed more than 1,000 CRADAs. The 1996 Technology Transfer Improvements and Advancement Act strengthened the rights of industrial firms to exclusively license patents resulting from CRADAs.

Federal antitrust policy toward collaborative R&D also was revised considerably during the early 1980s. Through much of the 1960s and 1970s, federal antitrust policy was hostile toward R&D collaboration among industrial firms. The Carter administration’s review of federal policies toward industrial innovation resulted in a new enforcement posture by the Justice Department, embodied in guidelines issued in 1980 that were less hostile toward such collaboration. In 1984, the passage of the National Cooperative Research Act (NCRA) created a statutory “safe harbor” from treble damages in private antitrust suits for firms registering their collaborative ventures with the Justice Department. The NCRA was amended to incorporate collaborative ventures in production in 1993. During the period from 1985 through 1994, U.S. firms formed 575 “research joint ventures,” the majority of which focused on process R&D. Interestingly, Justice Department data on filings under the NCRA since the passage of the 1993 amendments report the formation of only three joint production ventures.

Finally, the federal government began under the Reagan administration to provide financial support to R&D consortia in selected technologies and industries. The most celebrated example of this policy shift is SEMATECH, the semiconductor industry R&D consortium established in 1987 with funding from the federal government (until 1996), industry, and the state of Texas. Since its establishment under the Bush administration, the Advanced Technology Program has provided matching funds for a number of industry-led R&D consortia, some of which involve universities or federal laboratories as participants. More recent programs such as the Technology Reinvestment Program and the Partnership for a New Generation of Vehicles have drawn on funding from other federal agencies to supplement industry financial contributions for the support of industry-led R&D consortia.

Although federal policy has shifted dramatically in the past 20 years and spawned a diverse array of collaborative arrangements, surprisingly little effort has been devoted to evaluating any one of the legislative or administrative initiatives noted above. For example, how should one interpret the evidence on the small number of production joint ventures filed with the Justice Department since 1993? A broader assessment of the consistency and effects of these policies as a whole is needed. Given the number of such initiatives implemented in a relatively short period of time, their occasionally inconsistent structure, and their potentially far-reaching effects, this comprehensive assessment should precede additional legislation or other policy initiatives.

Benefits and risks

A brief discussion of the potential benefits and risks of R&D collaboration is useful to assess the design and implementation of specific collaborative programs. The economics literature identifies three broad classes of benefits from R&D collaboration among industrial firms: (1) enabling member firms to capture “knowledge spillovers” that otherwise are lost to the firm investing in the R&D that gives rise to them, (2) reducing duplication among member firms’ R&D investments, and (3) supporting the exploitation of scale economies in R&D. This group of (theoretical) benefits has been supplemented by others in more recent discussions of policy that often address other forms of collaboration: (1) accelerating the commercialization of new technologies, (2) facilitating and accelerating the transfer of research results from universities or public laboratories to industry, (3) supporting access by industrial firms to the R&D capabilities of federal research facilities, and (4) supporting the creation of a common technological “vision” within an industry that can guide R&D and related investments by public and private entities.

This is a long list of goals for any policy instrument. Moreover, many of these goals deal with issues of technology development and commercialization rather than scientific research. Although a sharp separation between scientific research and technology development is unwarranted on empirical and conceptual grounds, the fact remains that collaboration in “R” raises different issues and poses different challenges than does collaboration in “D” or in R&D.

The benefits of collaborative R&D that economists have cited in theoretical work are difficult to measure. More important, however, they imply guidelines for the design of R&D collaboration that may conflict with other goals of public R&D policy. The hypothesized ability of industry-led consortia to internalize knowledge spillovers, for example, is one reason to expect them to support more fundamental, long-range research. Nonetheless, most industry-led consortia, including SEMATECH, support R&D with a relatively short time horizon of three to five years. In addition, most industry-led R&D consortia seek to protect jointly created intellectual property. Yet protection of the results of collaborative R&D may limit the broader diffusion and exploitation of these results that would increase the social returns from these investments. When industry-led consortia receive public financial support, this dilemma is sharper still.

A similar tension may appear in collaborations between U.S. universities and industrial firms, especially those centered around the licensing of university research results. In fact, university research has long been transferred to industrial enterprises through a large number of mechanisms, including the training of graduates, publication of scientific papers, faculty consulting, and faculty-founded startup firms. Efforts by universities to obtain strong formal protection of this intellectual property or restrictive licensing terms may reduce knowledge transfer from the university, with potentially serious economic consequences. There is no compelling evidence of such effects as yet, but detailed study of this issue has only begun.

Reduced duplication among the R&D strategies of member firms in consortia and other forms of R&D collaboration is another theoretical benefit that may be overstated. The experience of participants in industry-led consortia, collaborations between federal laboratories and industry, and university-industry collaborations all suggest that some intrafirm R&D investment is essential if the results of the R&D performed in the collaborative venue are to be absorbed and applied by participating firms. In other words, some level of in-house duplication of the R&D performed externally is necessary to realize the returns from collaborative R&D.

The other goals of R&D collaboration that are noted above raise difficult issues. For example, the reduction of duplicative R&D programs within collaborating firms and the development by an industry of a common technological vision both imply some reduction in the diversity of scientific or technological avenues explored by research performers. Since one of the hallmarks of technology development, especially in its earliest stages, is pervasive uncertainty about future developments, the elimination of such diversity introduces some risk of collective myopia. One may overlook promising avenues for future research or even bypass opportunities for commercial technology development. A single-minded industry vision can conserve resources, but it may be risky or even ill-advised when one is in the earliest stages of development of a new area of science or technology. After all, the postwar United States has been effective in spawning new technology-intensive industries precisely because of the ability of the U.S. market and financial system to support the exploration of many competing, and often conflicting, views of the likely future path of development of breakthroughs such as the integrated circuit, the laser, or recombinant DNA techniques.

Managing R&D collaboration between industrial firms and universities or federal laboratories is difficult, and problems of implementation and management frequently hamper the realization of other goals of such collaboration. Collaborative R&D may accelerate the transfer of research results from these public R&D performers to industry, but the devil is in the details. The sheer complexity of the management requirements for R&D collaborations, especially those involving many firms and more than one university or laboratory, may slow technology transfer. In addition, the costs of such transfer, including the maintenance by participating firms of parallel R&D efforts in-house and/or the rotation of staff to an offsite R&D facility, may exceed the resources of smaller firms. In some cases, the effectiveness of CRADAs between federal laboratories and industry, as well as of university-industry collaborations, has been impeded by negotiations over intellectual property rights, undertaken to conform to the statutory and administrative requirements of such collaborations regardless of the actual importance of those rights.

A beginning at differentiation

At the risk of oversimplifying a very complex phenomenon, one can single out three categories of R&D collaboration as especially important: (1) industry-led consortia, which may or may not receive public funds; (2) collaborations between universities and industry; and (3) collaborations between industry and federal laboratories, often supported through CRADAs. These forms of collaboration have received direct encouragement, and in some cases financial support, from federal policy in the past 20 years. In addition to the variety of collaborative mechanisms, there is considerable variation among technology classes in the types of policies or organizational structures that will support effective R&D performance and dissemination.

Industry-led consortia. As noted earlier, these undertakings rarely focus on long-range research. Indeed, many consortia in the United States pursue activities that more closely resemble technology adoption than technology creation. SEMATECH, for example, has devoted considerable effort to the development of performance standards for new manufacturing equipment. These efforts are hardly long-range R&D, but they can aid equipment firms’ sales of these products and SEMATECH members’ adoption of new manufacturing technologies. Industry consortia also do not eliminate duplication in the R&D programs of participants because of the requirements for in-house investments in R&D and related activities to support inward transfer and application of collaborative R&D results. The need for these investments means that small firms may find it difficult to exploit the results of consortia R&D, and particular attention must be devoted to their needs. Consortia may aid in the formation of an industry-wide vision of future directions for technological innovation, but such consensus views are not always reliable, especially when technologies are relatively immature and the direction of their future development highly uncertain. Such visions can be overtaken by unexpected scientific or technological developments.

Some features of “best practice” identified with the SEMATECH experience, especially the need for flexibility in agenda-setting and adaptation, may be difficult to reconcile with the requirements of public oversight and evaluation of publicly funded programs. Moreover, the SEMATECH experience suggests that collaborative R&D alone is insufficient to overcome weaknesses in manufacturing quality, marketing, or other aspects of management. Indeed, in its efforts to strengthen smaller equipment suppliers, SEMATECH supplemented R&D with outreach and education (mainly in the equipment and materials industries) in areas such as quality management and financial management.

University-industry collaborations. Collaborative research involving industry and universities has a long history. A combination of growing R&D costs within academia and industry, along with the supportive federal legislation and policy shifts described above, has given considerable impetus to university-industry collaboration during the past 20 years. Industry now accounts for roughly 7 percent of academic R&D spending in the United States, the number of university-industry research centers has grown, and university patenting and licensing have expanded significantly since 1980. As in the case of SEMATECH, recent experience supports several observations about the effectiveness of these collaborations for industrial, academic, and national goals and welfare:

Little evidence is available about the ability of these collaborative R&D ventures to support long-term research. Cohen et al. (1994) found that most university-industry engineering research centers tended to focus on relatively near-term research problems and issues faced by industry. Other undertakings, however, such as the MARCO initiative sponsored by SEMATECH, are intended to underwrite long-range R&D efforts. University-industry collaboration thus may be able to support long-range R&D more effectively than industry-led consortia.

Preliminary evidence indicates that the Bayh-Dole Act has had little effect on the characteristics of faculty invention disclosures, although it did prompt many universities not previously active in this area to take up patenting and licensing. In addition, data from the University of California, which was active in patenting and licensing before the passage of the act, suggest that the number of annual invention disclosures began to grow more rapidly and shifted to include a larger proportion of biomedical inventions before, rather than after, the passage of this law. These findings are preliminary, however, and a broader evaluation of the effects of the Bayh-Dole Act is long overdue.

Effective industry-university relationships differ considerably among different industries, academic disciplines, and research areas. In biomedical research, for example, individual patents have considerable strength and therefore potentially great commercial value. Licensing relationships covering intellectual property “deliverables” thus have been quite effective. In other areas, however, such as chemical engineering or semiconductors, the goals of industry-university collaborations, and the vehicles that are best suited to their support, differ considerably. Firms in these industries often are less concerned with obtaining title to specific pieces of intellectual property than with seeking “windows” on new developments at the scientific frontier and access to high-quality graduates (who are themselves effective vehicles for the transfer of academic research results to industry). For firms with these objectives, extensive requirements for specification and negotiation of the disposition of intellectual property rights from collaborative research may impede such collaboration. The design of university-industry relationships should be responsive to such differences among fields of research.

Excessive emphasis on the protection by universities of the intellectual property resulting from collaborative ventures, especially when combined with restrictive licensing terms, may have a chilling effect on other channels of transfer, restricting the diffusion of research results and conceivably reducing the social returns from university research. Unbalanced policies, such as restrictions on publication, raise particular dangers for graduate education, which is a central mission of the modern university and an important channel for university-industry interaction and technology transfer.

Management of industry-university relationships should be informed by more realistic expectations among both industry executives and university administrators on means and ends. In many cases, universities may be better advised to focus their management of such relationships and any associated intellectual property on the establishment or strengthening of research relationships, rather than attempting to maximize licensing and royalty income.

As is true of industry-led consortia, industrial participants in collaborative R&D projects with universities must invest in mechanisms to support the inward transfer and absorption of R&D results. The requirements for such absorptive capacity mean that university-industry collaborations may prove less beneficial or feasible for small firms with insufficient internal resources to undertake such investments.

Collaborations between federal laboratories and industry. Our recent examination of a small sample of CRADAs between a large DOE nuclear weapons laboratory and a diverse group of industrial firms suggests the following preliminary observations concerning this type of R&D collaboration:

Cultural differences matter. All of the firms participating in these CRADAs agreed that this DOE laboratory had unique capabilities, facilities, and equipment that in many cases could not be duplicated elsewhere. Nevertheless, their contrasting backgrounds meant that laboratory and firm researchers had different approaches to project management that occasionally conflicted. Moreover, the limited familiarity of many laboratory personnel with the needs of potential commercial users of these firms’ technologies meant that collaboration in areas distant from the laboratory’s historic missions was more difficult and often less successful.

The focus of many CRADAs on specification of intellectual property (IP) rights often served as an obstacle to the timely negotiation of the terms of these ventures. In a majority of the cases we reviewed, the participating firms were not particularly interested in patenting the results of their projects. The importance of formal IP rights differs among technological fields, but the emphasis that many CRADAs place on intellectual property may be misplaced, and alternative vehicles may be better suited to supporting such work. As with university-industry collaboration, no single instrument will serve all technologies or research fields. Laboratory and firm management needs to devote more effort to selecting projects for collaboration and to improving the fit between each project and the vehicle chosen to support it.

Most of the CRADAs reviewed in our study were concerned with near-term R&D or technology development. Participating firms frequently found it difficult to manage the transition from development to production without some continuing support from DOE personnel. Yet the terms of many of these CRADAs made a more gradual handoff very difficult.

As in other types of R&D collaboration, significant investments by participating firms to support inward transfer and application of the results of CRADAs were indispensable. Firms that found CRADAs to be especially beneficial had invested heavily in this relationship, including significant personnel rotation, travel, and communications. Along with the small size of their budgets, the costs of these investments made CRADAs involving small firms difficult to manage.

Who pays?

The case for public funding of collaborative R&D resembles the case for public funding of R&D more generally. This case is strongest where there is a high social return from collaborative R&D activities, and the gap between private and social returns is such that without public funding, the work would not be undertaken. But these arguments for public funding of collaborative R&D raise two important challenges to the design of such projects:

What is the appropriate “match” between public and private funding? A matching requirement creates incentives for participating firms to minimize costs and apply the results of such R&D. Setting a matching requirement at a very low share of total program costs may weaken such incentives and result in R&D that is of little relevance to an industry’s competitive challenges. However, if the private matching requirement is set at a relatively high level (for example, above 75 percent of total program costs), firms may choose not to participate in collaborative R&D or will undertake projects that would have been launched in any event. The ideal matching requirement will balance these competing objectives, but there is little guidance from economic theory or prior experience to inform such a choice.
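As a rough illustration of the tradeoff, the short Python sketch below uses hypothetical numbers (a $100 million program and three possible matching levels); it is not drawn from any actual program, and the thresholds in the comments simply restate the reasoning above.

    # Illustrative sketch only, with hypothetical numbers: how the private
    # "match" changes what participating firms have at stake in a program.
    def cost_shares(total_cost, private_share):
        """Split a program's cost between public and private contributors."""
        private = total_cost * private_share
        public = total_cost - private
        return public, private

    program_cost = 100_000_000  # hypothetical $100 million program

    for share in (0.10, 0.50, 0.80):
        public, private = cost_shares(program_cost, share)
        print(f"match {share:.0%}: public ${public:,.0f}, private ${private:,.0f}")
        # At a 10% match, firms risk little and may tolerate low-relevance work;
        # above roughly 75%, they may fund only projects they would pursue anyway.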

If R&D collaboration seeks to encourage research investments yielding high social returns, the case for tight controls on the dissemination of the results of such R&D is weak. The assignment to private firms of intellectual property rights to the results of such R&D that is allowed by the Bayh-Dole Act and other policies is intended to encourage the commercialization of these results by establishing a stronger reason for their owners to undertake such investments. But by limiting the access of other firms to these results, patents or restrictive licenses may slow the diffusion of R&D results, reducing the social returns from the publicly funded R&D. This dilemma is another one for which neither economic theory nor program experience provides much guidance. As a general rule, however, broad patents and restrictive licensing terms for patents resulting from publicly funded collaborative R&D should be discouraged. This policy recommendation suggests that the competitive effects of any greater tilt toward exclusivity in the licensing of these patents, such as that embodied in the Technology Transfer Improvements and Advancement Act, should be monitored carefully.

These dilemmas apply to public funding of R&D, especially civilian R&D performed within industry, regardless of whether R&D collaboration is involved. The mere presence of a collaborative relationship does not eliminate them, and in some cases may complicate their resolution.

The “taxonomy” of R&D collaborations discussed earlier is hardly exhaustive, but it suggests the need for a clearer assessment of the links between the goals of R&D collaborations and their design. For example, R&D collaborations established to support long-range R&D may be more effective if they link universities and industry, rather than being undertaken through industry-led consortia. At the same time, the effects of collaboration on the other missions of U.S. universities must be monitored carefully so as not to undercut performance in these areas. Small firms often face serious problems with R&D collaboration, because of the significant investments that participants must make in technology absorption and the inability of R&D collaboration to upgrade technological capabilities in firms lacking them. In addition, small firms often need much more than technological assistance alone in order to improve their competitive performance. R&D collaborations that seek to accelerate technology access and transfer must be designed to avoid administrative requirements that may instead slow these activities. In particular, negotiations over intellectual property rights must be handled flexibly and in a manner that is responsive to the needs of all the participants.

The variations among different types of R&D collaboration are substantial, and policymakers and managers alike should proceed with great caution in reaching sweeping conclusions or in developing detailed policies that seek to govern collaboration in all institutional venues, technologies, and industries. Broad guidelines are appropriate and consistent with Congress’s role in ensuring that these undertakings serve the public interest. But the implementation of these guidelines and detailed policies governing R&D collaborations are best left to the agencies and institutions directly concerned with this activity. Greater flexibility for federal agencies in negotiating the terms of CRADAs within relatively broad guidelines, for example, would facilitate their more effective use and more careful consideration of alternatives to these instruments for collaboration.

The phenomenon of R&D collaboration has grown so rapidly that hard facts and robust generalizations about best practice and policy are exceedingly difficult to develop for all circumstances. A more comprehensive effort to collect data on R&D collaboration, perhaps spearheaded by the Commerce Department’s Technology Administration, and greater efforts to capture and learn from the results of such ventures surely are one of the most urgent prerequisites to any effort to formulate a broader policy on R&D collaboration.

Critical Infrastructure: Interlinked and Vulnerable

The infrastructure of the United States-the foundations on which the nation is built-is a complex system of interrelated elements. Those elements-transportation, electric power, financial institutions, communications systems, and oil and gas supply-reach into every aspect of society. Some are so critical that if they were incapacitated or destroyed, an entire region, if not the nation itself, could be debilitated. Continued operation of these systems is vital to the security and well-being of the country.

Once these systems were fairly independent. Today they are increasingly linked and automated, and the advances enabling them to function in this manner have created new vulnerabilities. What in the past would have been an isolated failure caused by human error, malicious deeds, equipment malfunction, or the weather, could today result in widespread disruption.

A presidential commission concluded that the nation’s infrastructure is at serious risk and the capability to do harm is readily available.

Among certain elements of the infrastructure (for example, the telecommunications and financial networks), the degree of interdependency is especially strong. But they all depend upon each other to varying degrees. We can no longer regard these complex operating systems as independent entities. Together they form a vast, vital-and vulnerable-system of systems.

The elements of infrastructure themselves are vulnerable to physical and electronic disruptions, and a dysfunction in any one may produce consequences in the others. Some recent examples:

  • The western states power outage of 1996. One small predictable accident of nature-a power line shorting after it sagged onto a tree-cascaded into massive unforeseen consequences: a power-grid collapse that persisted for six hours and very nearly brought down telecommunications networks as well. The system was unable to respond quickly enough to prevent the regional blackout, and it is not clear whether measures have been taken to prevent another such event.
  • The Northridge, California, earthquake of January 1994 affecting Los Angeles. First-response emergency personnel were unable to communicate effectively because private citizens were using cell phones so extensively that they paralyzed emergency communications.
  • Two major failures of AT&T communications systems in New York in 1991. The first, in January, created numerous problems, including airline flight delays of several hours, and was caused by a severed high-capacity telephone cable. The second, in September, disrupted long distance calls, caused financial markets to close and planes to be grounded, and was caused by a faulty communications switch.
  • The satellite malfunction of May 1998. A communications satellite lost track of Earth and cut off service to nearly 90 percent of the nation’s approximately 45 million pagers, affecting not only ordinary business transactions but also physicians, law enforcement officials, and others who provide vital services. It took nearly a week to restore the system.

Failures such as these have many harmful consequences. Some are obvious, but others are subtle-for example, the loss of public confidence that results when people are unable to reach a physician, call the police, contact family members in an emergency, or use an ATM to get cash.

The frequency of such incidents and the severity of their impact are increasing, in part because of vulnerabilities that exist in the nation’s information infrastructure. John Deutch, then director of the CIA, told Congress in 1997 that he ranked information warfare as the second most serious threat to U.S. national security, just below weapons of mass destruction in terrorist hands. Accounts of hacking into the Pentagon’s computers and breakdowns of satellite communications have been reported in the press. These incidents suggest wider implications for similar systems.

Two major issues confront the nation as we consider how best to protect critical elements of the infrastructure. The first is the need to define the roles of the public and private sectors and to develop a plan for sharing responsibility between them. The second is the need to understand how each system in the infrastructure functions and how it affects the others so that its interdependencies can be studied. Both issues involve a multitude of considerations.

Dire warning

In 1996, the Presidential Commission on Critical Infrastructure Protection was established. It included officials concerned with the operation and protection of the nation, drawn from the energy, defense, and commerce agencies as well as the CIA and the FBI, along with 15 people from the private sector. The commission conducted a 15-month study of how each element of the infrastructure operates, how it might be vulnerable to failures, and how it might affect the others. Among its conclusions: 1) the infrastructure is at serious risk, and the capability to do harm is readily available; 2) there is no warning system to protect the infrastructure from a concerted attack; 3) government and industry do not efficiently share information that might give warning of an electronic attack; and 4) federal R&D budgets do not include the study of threats to the component systems in the infrastructure. (Information on the commission, its members, its tasks, and its goals, as well as the text of the presidential directive, is available on the Web at http://www.pccip.gov.)

The primary focus of industry-government cooperation should be to share information and techniques related to risk management assessments.

A major question that faced the commission, and by implication the nation, is the extent to which the federal government should get involved in infrastructure protection and in establishing an indications and warning system. If the government is not involved, who will ensure that the interdependent systems function with the appropriate reliability for the national interest? There is at present no strategy to protect the interrelated aspects of the national infrastructure; indeed, there is no consensus on how its various elements actually mesh.

We believe that protecting the national infrastructure must be a key element of national security in the next few decades. There is obviously an urgent and growing need for a way to detect and warn of impending attacks on, and system failures within, critical elements of the national infrastructure. If we do not develop such an indications and warning capability, we will be exposed and easily threatened.

The presidential commission’s recommendations also resulted in the issuance on May 22, 1998, of Presidential Decision Directive 63 (PDD 63) on Critical Infrastructure Protection. PDD 63 establishes lines of responsibility within the federal government for protecting each of the infrastructure elements and for formulating an R&D strategy for improving the surety of the infrastructure.

PDD 63 has already triggered infrastructure-protection efforts by all federal agencies and departments. For example, not only is the Department of Energy (DOE) taking steps to protect its own critical infrastructure, but it is also developing a plan to protect the key components of the national energy infrastructure. Energy availability is vital to the operations of other systems. DOE will be studying the vulnerabilities of the nation’s electric, gas, and oil systems and trying to determine the minimum number of systems that must be able to continue operating under all conditions, as well as the actions needed to guarantee their operation.

Achieving public-private cooperation. A major issue in safeguarding the national infrastructure is the need for public-private cooperation. Private industry owns 85 percent of the national infrastructure, and the country’s economic well-being, national defense, and vital functions depend on the reliable operation of these systems.

Private industry’s investment in protecting the infrastructure can be justified only from a business perspective. Risk assessments will undoubtedly be performed to compare the cost of options for protection with the cost of the consequences of possible disruptions. For this reason, it is important that industry have all the information it needs to perform its risk assessments. The presidential commission reported that private owners and operators of the infrastructure need more information on threats and vulnerabilities.

Much of the information that industry needs may be available from the federal government, particularly from the law enforcement, defense, and intelligence communities. In addition, many government agencies have developed the technical skills and expertise required to identify, evaluate, and reduce vulnerabilities to electronic and physical threats. This suggests that the first and primary focus of industry-government cooperation should be to share information and techniques related to risk management assessments, including incident reports, identification of weak spots, plans and technology to prevent attacks and disruptions, and plans for how to recover from them.

Sharing information can help lessen damage and speed recovery of services. However, such sharing is difficult for many reasons. Barriers to collaboration include classified and secret materials, proprietary and competitively sensitive information, liability concerns, fear of regulation, and legal restrictions.

There are two cases in which the public and private sectors already share information successfully. The first is the collaboration between the private National Security Telecommunications Advisory Committee and the government’s National Communications System. The former comprises the leading U.S. telecommunications companies; the latter is a confederation of 23 federal government entities. The two groups are charged jointly with ensuring the robustness of the national telecommunications grid. They have been working together since 1984 and have developed the trust that allows them to share information about threats, vulnerabilities, operations, and incidents, which improves the overall surety of the telecommunications network. Their example could be followed in other infrastructure areas, such as electric power.

The second example of successful public-private cooperation involves the epidemiological databases of the federally run Centers for Disease Control (CDC). The CDC has over the years developed a system for acquiring medical data to analyze for the public good. The CDC collaborates with state agencies and responsible individuals to obtain information that has national importance, and it obtains that information as anonymized data, thus protecting the privacy of individual patients. The way CDC gathers, analyzes, and reports data involving an enormous number of variables from across the nation is a model for how modern information technology can be applied to fill a social need while minimizing harm to individuals. Especially relevant to information-sharing is the manner in which the CDC is able to eliminate identifiable personal information from databases, a concern when industry is being asked to supply the government with proprietary information.
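The basic idea can be pictured with a minimal sketch. The Python snippet below is purely illustrative, not the CDC’s actual system; the field names and the record are hypothetical, and real de-identification involves far more than dropping a few fields. It simply shows direct identifiers being stripped before data are shared.

    # Purely illustrative: strip direct identifiers from an incident record
    # before sharing it, in the spirit of the CDC model described above.
    # Field names and the record itself are hypothetical.
    IDENTIFYING_FIELDS = {"company_name", "address", "phone", "account_id"}

    def anonymize(record):
        """Return a copy of the record with direct identifiers removed."""
        return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

    incident = {
        "company_name": "Example Power Co.",  # identifying: dropped before sharing
        "account_id": "A-1021",               # identifying: dropped before sharing
        "region": "Southwest",                # retained for analysis
        "outage_minutes": 42,                 # retained for analysis
        "suspected_cause": "intrusion",       # retained for analysis
    }

    print(anonymize(incident))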

The ultimate goal is to develop a real-time ability to share information on the current status of all systems in the infrastructure. It would permit analysis and assessment to determine whether certain elements were under attack. As the process of risk assessment and development of protection measures proceeds, a national center for analysis of such information should be in place and ready to facilitate cooperation between the private and public sectors. To achieve this goal, a new approach to government-industry partnerships will be needed.

Assessing system adequacy

We use the term “infrastructure surety” to describe the protection and operational assurance that is needed for the nation’s critical infrastructure. Surety is a term that has long been associated with complex high-consequence systems, such as nuclear systems, and it encompasses safety, security, reliability, integrity, and authentication, all of which are needed to ensure that systems are working as expected in any situation.

A review of possible analytical approaches to this surety problem suggests the need for what is known as consequence-based assessment in order to understand and manage critical elements of these systems. The approach begins by defining the consequences of disruptions and then identifying critical nodes: elements so important that severe consequences would result if they could not operate. Finally, it outlines protection mechanisms and the associated costs of protecting those nodes. This approach is used to assess the safety of nuclear power plants, and insurance companies use it in a variety of ways. It permits the costs and benefits of each protection option to be assessed realistically and is particularly attractive in situations in which the threat is difficult to quantify, because it allows the costs of disruptions to be defined independently of what causes the disruptions. Industry can then use these results in assessing risks; the approach gives industry a way to establish a business case for protecting assets.
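To make the logic of such an assessment concrete, here is a toy Python sketch with entirely invented nodes and dollar figures; it is not any agency’s actual methodology, only an illustration of the three steps described above: estimate the consequence of losing each node, rank the nodes, and weigh protection cost against consequence cost.

    # Toy consequence-based assessment with invented numbers (in $ millions):
    # rank nodes by the cost of losing them, then compare the cost of protecting
    # each node against the consequence of its loss.
    nodes = {
        # node name: (consequence cost if disabled, cost to protect it)
        "substation_A":   (500, 20),
        "telecom_switch": (300, 5),
        "pipeline_pump":  (40, 15),
    }

    # Steps 1-2: define consequences and identify the most critical nodes.
    critical_order = sorted(nodes, key=lambda n: nodes[n][0], reverse=True)

    # Step 3: outline protection costs and the resulting business case.
    for name in critical_order:
        consequence, protection = nodes[name]
        decision = "protect" if protection < consequence else "reconsider"
        print(f"{name}: loss costs ${consequence}M, protection costs ${protection}M -> {decision}")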

One area of particular concern, and one that must be faced in detail with private industry, is the widespread and increasing use of supervisory control and data acquisition systems-networks of information systems that interconnect the business, administrative, safety, and operational sections within an element of the infrastructure. The presidential commission identified these supervisory control systems as needing attention because they control the flow of electricity, oil and gas, and telecommunications throughout the country and are also vulnerable to electronic and physical threats. Because of its long-term involvement with complex and burgeoning computer networks, DOE could work with industry to develop standards and security methods for supervisory control and data acquisition protocols and to develop the means to monitor vital parts in the system.

The need for a warning center

The commission recognized the need for a national indications and warning capability to monitor the critical elements of the national infrastructure and determine when and if they are under attack or have fallen victim to destructive natural occurrences. It favors surveillance through a national indications and warning center, which would be operated by a new National Infrastructure Protection Center (NIPC). The center would be a follow-on to the Infrastructure Protection Task Force, headed by the FBI and created in 1996. It had representatives from the Justice, Transportation, Energy, Defense, and Treasury Departments, the CIA, FBI, Defense Information Systems Agency, National Security Agency, and National Communications System. The task force was charged with identifying and coordinating existing expertise and capabilities in the government and private sector as they relate to protecting the critical infrastructure from physical and electronic threats. A national center would receive and transmit data across the entire infrastructure, warning of impending attack or failure, providing for physical protection of a vital system or systems, and safeguarding other systems that might be affected. This would include a predictive capability. The center would also allow proprietary industry information to be protected.

Timely warning of attacks and system failures is a difficult technical and organizational challenge. The key remaining questions are 1) which data should be collected to provide the highest probability that impending attacks can be reliably predicted, sensed, and/or indicated to stakeholders? and 2) how can enormous volumes of data be efficiently and rapidly processed?

Securing the national infrastructure depends on understanding the relationships among its various elements. Computer models are an obvious choice for simulating interactions among infrastructure elements, and one approach in particular is proving to be extremely effective for this kind of simulation. In it, the interacting entities are modeled individually by computer programs called intelligent agents. Each agent is designed to represent an entity of some kind, such as a bank, an electrical utility, or a telecommunications company. The agents are allowed to interact, and as they do so, they learn from their experience, alter their behavior, and interact differently in subsequent encounters, much as a person or company would do in the real world.

The collective behavior of the interdependent systems then becomes apparent. This makes it possible to simulate a large number of possible situations and to analyze their consequences. One way to express the consequences of disruption is to analyze the economic impact of an outage on a city, a region, and the nation. The agent-based approach can use thousands of agent programs to model very complex systems. In addition, the user can set up hypothetical situations (generally disruptive events, such as outages or hacking incidents) to determine how the system performs. In fact, the agent-based approach can model the effects of an upset to a system without ever knowing the exact nature of the upset. It offers advantages over traditional techniques for modeling the interdependencies of infrastructure elements, because it can use rich sources of micro-level data (demographics, for example) to develop forecasts of interactions, instead of relying on macro-scale information, such as flow models for electricity.
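A drastically simplified sketch of the idea, written in Python, appears below. The entities, rules, and numbers are invented for illustration and bear no relation to Sandia’s actual models; the point is only to show agents that represent entities, interact, and adjust their behavior in later rounds.

    # Toy agent-based sketch: each agent stands for one entity, interacts with
    # the others each round, and adapts based on what it experienced.
    class Agent:
        def __init__(self, name):
            self.name = name
            self.own_capacity = 1.0   # what the entity could deliver on its own
            self.delivered = 1.0      # what it actually delivers, given dependencies
            self.reserve = 0.1        # buffer (backup power, cash on hand, spare lines)

        def interact(self, others):
            # Delivered service is limited both by the entity itself and by the
            # weakest entity it depends on, cushioned by its own reserve.
            dependency = min(o.delivered for o in others)
            self.delivered = min(self.own_capacity, dependency + self.reserve)

        def adapt(self):
            # A bad round teaches the agent to carry a larger reserve next time.
            if self.delivered < 0.9:
                self.reserve = min(0.5, self.reserve + 0.05)

    agents = [Agent("electric_utility"), Agent("bank"), Agent("telecom")]
    agents[0].own_capacity = 0.4  # hypothetical disruption at the utility

    for round_number in range(5):
        for a in agents:
            a.interact([o for o in agents if o is not a])
        for a in agents:
            a.adapt()
        print(round_number, {a.name: round(a.delivered, 2) for a in agents})
        agents[0].own_capacity = min(1.0, agents[0].own_capacity + 0.2)  # gradual repair

Running the sketch shows the disruption at the utility cascading to the other agents and then easing as the utility recovers and the agents learn to hold larger reserves. Real models of this kind use thousands of agents and far richer behavioral rules, as noted above, but the basic loop of interaction and adaptation is the same.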

The agent-based approach can exploit the speed, performance, and memory of massively parallel computers to develop computer models as tools for security planning and for counterterrorism measures. It will allow critical nodes to be mapped and it provides a method to quantify the physical and economic consequences of large and small disruptions.

A few agent-based models already exist. One example is ENERGY 2020, which covers the electric power and gas industries; it can be combined with a powerful commercially available economic model, such as that of Regional Economic Models, Inc., or with Sandia’s ASPEN, which models the banking and finance infrastructure.

In conjunction with these agent-based models, multiregional models encompassing the entire U.S. economy can evaluate regional effects of national policies, events, or other changes. The multiregional approach incorporates key economic interactions among regions and allows for national variables to change as the net result of regional changes. It is based on the premise that national as well as local markets determine regional economic conditions, and it incorporates interactions among these markets. By ignoring regional markets and connections, other methods may not accurately account for regional effects or represent realistic national totals.

We must soon develop ways to detect and warn of impending attacks on and system failures within critical elements of the national infrastructure.

At Sandia, we modeled two scenarios, both involving the electricity supply to a major U.S. city. The first assumed a sequence of small disruptions over one year that resulted from destruction of electricity substations servicing a quarter of the metropolitan area. This series of small outages had the long-term effect of increasing labor and operating costs, and thus the cost of electricity, making the area less apt to expand economically and so less attractive to a labor force.

In the second scenario, a single series of short-lived and well-planned explosions destroyed key substations and then critical transmission lines. We timed and sequenced the simulated explosions so that they did significant damage to generating equipment. Subsequent planned damage to transmission facilities exacerbated the problem by making restoration of power more difficult.

Yet our findings were the opposite of what might have been expected. Scenario 1, which was less than half as destructive as scenario 2, proved five times as costly in terms of lost business and of maintaining the supply of electricity. Thus it had a long-lasting and substantial effect on the area. The United States as a whole feels the effects of scenario 1 more than it does those of scenario 2. A series of small disruptions provides a strong signal about the risk of doing business in a geographic area, and companies tend to relocate. With a single disruption, even a large one, economic uncertainty is short-lived and local, and the rest of the country tends to be isolated from the problem. This example gives an idea of what computer simulations can accomplish and the considerations they generate. Validating the simulations will, of course, require additional work. One clear advantage of such simulations is the ability to explore nonintuitive outcomes.
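The intuition behind that result can be illustrated with a toy calculation. The Python sketch below is not the Sandia simulation; every number in it is invented. It merely shows how repeated small outages, each of which ratchets up operating costs and drives some business away, can accumulate a larger total burden than a single, much more destructive event from which costs quickly return to normal.

    # Toy illustration with invented numbers: cumulative cost of repeated small
    # outages versus one large outage, in arbitrary cost units over one year.
    def many_small_outages(months=12, outage_cost=5.0, monthly_premium=2.0,
                           relocation_loss=1.5):
        """Quarterly outages; each raises ongoing costs and drives business away."""
        total, premium = 0.0, 0.0
        for month in range(months):
            if month % 3 == 0:                     # a small outage every quarter
                total += outage_cost               # direct damage
                premium += monthly_premium         # operating costs ratchet upward
                total += relocation_loss * (months - month)  # business that leaves
            total += premium                       # pay the elevated costs each month
        return total

    def one_large_outage(direct_cost=40.0, recovery_months=2, monthly_premium=2.0):
        """A single big event: large direct cost, but conditions soon normalize."""
        return direct_cost + recovery_months * monthly_premium

    print(f"many small outages: {many_small_outages():.0f} cost units")
    print(f"one large outage:   {one_large_outage():.0f} cost units")

With these particular invented numbers, the string of small outages costs roughly three times as much as the single large one; the Sandia result described above was larger still, but the direction of the effect is the same.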

Eventually, it may be possible to combine such models into a picture of the critical national infrastructure in toto. An understanding of the fundamental feedback loops in modeling the national infrastructure is critical to analyzing and predicting the response of the infrastructure to unexpected perturbations. With further development, such computer models could analyze the impact of disruptive events in the infrastructure anywhere in the United States. They could identify critical nodes, assess the susceptibility of all the remaining systems in the infrastructure, and determine cost-effective and timely countermeasures.

These simulations can even determine winners and losers in an event and predict long-term consequences. For example, Sandia’s Teraflop computer system could allow such events to be analyzed as they happen to provide the information flow and technical support required for subsequent responses and for long-term considerations, such as remediation and prevention. Such capabilities could be the backbone of a national indications and warning center.

Changing marketplace

The U.S. infrastructure will continue to be reconfigured because of rapid advances in technology and policy. It will change with the numbers of competing providers and in response to an uncertain regulatory and legal framework. Yet surety is easiest to engineer in discrete well-understood systems. Indeed, the exceptional reliability and enviable security of the current infrastructure were achieved in the regulated systems-engineering environment of the past. The future environment of multiple providers, multiple technologies, distributed control, and easy access to hardware and software is fundamentally different. The solutions that will underlie the security of the future infrastructure will be shaped by this different environment and may be expected to differ considerably from the solutions of the past.

Some current policy discussions tend to treat infrastructure surety as an expected product of market forces. Where there are demands for high reliability or high surety, there will be suppliers-at a price. In this view, customers will have an unprecedented ability to protect themselves by buying services that can function as a back-up, demanding services that support individual needs for surety, and choosing proven performers as suppliers.

But the surety of the nation’s infrastructure is not guaranteed. We who have long been in national security question the ability of the marketplace to anticipate and address low-probability but high-consequence situations. We are moving from an era in which the surety of the infrastructure was generally predictable and controlled to one in which there are profound uncertainties.

Generally, the private sector cannot expect the market to provide the level of security and resilience that will be required to limit damage from a serious attack on, or escalating breakdown of, the infrastructure or one of its essential systems. The issue of private and public sector rights and responsibilities as they relate to the surety of the infrastructure remains an unresolved part of the national debate.

In the United States, government is both regulator and concerned customer.

Essential governmental functions include continuity, emergency services, and military operations, and they all depend on energy, communications, and computers. The government has a clear role in working with private industry to improve the surety of the U.S. infrastructure.

Because of its responsibilities in areas involving state-of-the-art technologies, such as those common to national defense and electrical power systems, DOE is a national leader in high-speed computation, computer modeling and simulation, and in the science of surety assessment and design. Among its capabilities:

  • Computer modeling of the complex interactions among infrastructure systems. Various models of individual elements of the infrastructure have been developed in and outside DOE, although there are currently no models of their interdependencies.
  • Risk assessment tools to protect physical assets. In the 1970s, technologies were developed to prevent the theft of nuclear materials transported between DOE facilities. Recently major improvements have been made in modeling and simulating physical protection systems.
  • Physical protection for plants and facilities in systems determined to be crucial to the operation of the nation. This involves technology for sensors, entry control, contraband detection, alarms, anti-failure mechanisms, and other devices to protect these systems. Some of the technology and the staff to develop new protection systems are available; the issue is what level of protection is adequate and who will bear the costs.
  • Architectural surety, which calls for enhanced safety, reliability, and security of buildings. Sandia is formulating a program that encompasses computational simulation to predict structural responses to bomb blasts, along with other elements such as computer models for the fragmentation of window glass, for monitoring instruments, and for stabilization of human health.
  • Data collection and surety. DOE already has technical capability to contribute, but what is needed now is to define and acquire the necessary data, develop standards and protocols for data sharing, design systems that protect proprietary data, and develop analytical tools to ensure that rapid and correct decisions will emerge from large volumes of data.

Next steps

The report by the President’s Commission on Critical Infrastructure Protection urged that a number of key actions be started now. In particular, these recommendations require prompt national consideration:

  • Establishment of a National Indications and Warning Center, with corrective follow-up coordinated by the National Infrastructure Protection Center.
  • Development of systems to model the national critical infrastructure, including consequence-based assessment, probabilistic risk assessment, modeling of interdependencies, and other similar tools to enhance our understanding of how the infrastructure operates. Close cooperation among government agencies and private-sector entities will be vital to the success of this work.
  • Development of options for the protection of key physical assets using the best available technology, such as architectural surety and protection of electronic information through encryption and authentication, as developed in such agencies as DOE, the Department of Defense, and the National Aeronautics and Space Administration.

Adequate funding will be needed for these programs. Increasing public awareness of the vulnerability of the systems that the critical national infrastructure comprises and of the related danger to national security and the general welfare of the nation will generate citizen support for increased funding. We believe these issues are critical to the future of the country and deserve to be brought to national attention.

An Electronic Pearl Harbor? Not Likely

Information warfare: The term conjures up a vision of unseen enemies, armed only with laptop personal computers connected to the global computer network, launching untraceable electronic attacks against the United States. Blackouts occur nationwide, the digital information that constitutes the national treasury is looted electronically, telephones stop ringing, and emergency services become unresponsive.

But is such an electronic Pearl Harbor possible? Although the media are full of scary-sounding stories about violated military Web sites and broken security on public and corporate networks, the menacing scenarios have remained just that-only scenarios. Information warfare may be, for many, the hip topic of the moment, but a factually solid knowledge of it remains elusive.

Hoaxes and myths about information warfare contaminate everything from official reports to newspaper stories.

There are a number of reasons why this is so. The private sector will not disclose much information about any potential vulnerabilities, even confidentially to the government. The Pentagon and other government agencies maintain that a problem exists but say that the information is too sensitive to be disclosed. Meanwhile, most of the people who know something about the subject are on the government payroll or in the business of selling computer security devices and in no position to serve as objective sources.

There may indeed be a problem. But the only basis we have for judging that at the moment is the sketchy information the government has thus far provided. An examination of that evidence casts a great deal of doubt on the claims.

Computer-age ghost stories

Hoaxes and myths about info-war and computer security-the modern equivalent of ghost stories-contaminate everything from newspaper stories to official reports. Media accounts are so distorted or error-ridden that they are useless as a barometer of the problem. The result has been predictable: confusion over what is real and what is not.

A fairly typical example of the misinformation that circulates on the topic is an article published in the December 1996 issue of the FBI’s Law Enforcement Bulletin. Entitled “Computer Crime: An Emerging Challenge for Law Enforcement,” the piece was written by academics from Michigan State and Wichita State Universities. Intended as an introduction to computer crime and the psychology of hackers, the article presented a number of computer viruses as examples of digital vandals’ tools.

A virus called “Clinton,” wrote the authors, “is designed to infect programs, but . . . eradicates itself when it cannot decide which program to infect.” Both the authors and the FBI were embarrassed to be informed later that there was no such virus as “Clinton.” It was a joke, as were all the other examples of viruses cited in the article. They had all been originally published in an April Fool’s Day column of a computer magazine.

The FBI article was a condensed version of a longer scholarly paper presented by the authors at a meeting of the Academy of Criminal Justice Sciences in Las Vegas in 1996. Entitled “Trends and Experiences in Computer-Related Crime: Findings from a National Study,” the paper told of a government dragnet in which federal agents arrested a dangerously successful gang of hackers. “The hackers reportedly broke into a NASA computer responsible for controlling the Hubble telescope and are also known to have rerouted telephone calls from the White House to Marcel Marceau University, a miming institute,” wrote the authors of their findings. This anecdote, too, was a rather obvious April Fool’s joke that the authors had unwittingly taken seriously.

The FBI eventually recognized the errors in its journal and performed a half-hearted edit of the paper posted on its Web site. Nevertheless, the damage was done. The FBI magazine had already been sent to 55,000 law enforcement professionals, some of them decisionmakers and policy analysts. Because the article was written for those new to the subject, it is reasonable to assume that it was taken very seriously by those who read it.

Hoaxes about computer viruses have propagated much more successfully than the real things. The myths reach into every corner of modern computing society, and no one is immune. Even those we take to be authoritative on the subject can be unreliable. In 1997, members of a government commission headed by Sen. Daniel Moynihan (D-N.Y.), which included former directors of the Central Intelligence Agency and the National Reconnaissance Office, were surprised to find that a hoax had contaminated a chapter addressing computer security in their report on reducing government secrecy. “One company whose officials met with the Commission warned its employees against reading an e-mail entitled Penpal Greetings,” the Moynihan Commission report stated. “Although the message appeared to be a friendly letter, it contained a virus that could infect the hard drive and destroy all data present. The virus was self-replicating, which meant that once the message was read, it would automatically forward itself to any e-mail address stored in the recipient’s in-box.”

Penpal Greetings and dozens of other nonexistent variations on the same theme are believed to be real to such an extent that many computer security experts and antivirus software developers find themselves spending more time defusing the hoaxes than educating people about the real thing. In the case of Penpal, these are the facts: A computer virus is a very small program designed to spread by attaching itself to other bits of executable program code, which act as hosts for it. The host code can be office applications, utility programs, games, or special documents created by Microsoft Word that contain embedded computer instructions called macro commands-but not standard text electronic mail. For Penpal to be real would require all electronic mail to contain executable code automatically run when someone opens an e-mail message. Penpal could not have done what was claimed.

That said, there is still plenty of opportunity for malicious meddling, and because of it, thousands of destructive computer viruses have been written for the PC by bored teenagers, college students, computer science undergraduates, and disgruntled programmers during the past decade. It does not take a great leap of logic to realize that the popular myths such as Penpal have contributed to the sense, often mentioned by those writing about information warfare, that viruses can be used as weapons of mass destruction.

The widely publicized figure of 250,000 hacker intrusions on Pentagon computers in 1995 is fanciful.

Virus writers have been avidly thinking about this mythical capability for years, and many viruses have been written with malicious intent. None have shown any utility as weapons. Most attempts to make viruses for use as directed weapons fail for easily understandable reasons. First, it is almost impossible for even the most expert virus writer to anticipate the sheer complexity and heterogeneity of systems the virus will encounter. Second, simple human error is always present. It is an unpleasant fact of life that all software, no matter how well-behaved, harbors errors often unnoticed by its authors. Computer viruses are no exception. They usually contain errors, frequently such spectacular ones that they barely function at all.

Of course, it is still possible to posit a small team of dedicated professionals employed by a military organization that could achieve far more success than some alienated teen hackers. But assembling such a team would not be easy. Even though it’s not that difficult for those with basic programming skills to write malicious software, writing a really sophisticated computer virus requires some intimate knowledge of the operating system it is written to work within and the hardware it will be expected to encounter. Those facts narrow the field of potential professional virus designers considerably.

Next, our virus-writing team leader would have to come to grips with the reality, if he’s working in the free world, that the pay for productive work in the private sector is a lot more attractive than anything he can offer. Motivation-in terms of remuneration, professional satisfaction, and the recognition that one is actually making something other people can use-would be a big problem for any virus-writing effort attempting to operate in a professional or military setting. Another factor our virus developer would need to consider is that there are no schools turning out information technology professionals who have been trained in virus writing. It’s not a course one can take at an engineering school. Everyone must learn this dubious art from scratch.

And computer viruses come with a feature that is anathema to a military mind. In an era of smart bombs, computer viruses are hardly precision-guided munitions. Those that spread do so unpredictably and are as likely to infect the computers of friends and allies as enemies. With militaries around the world using commercial off-the-shelf technology, there simply is no haven safe from potential blow-back by one’s creation. What can infect your enemy can infect you. In addition, any military commander envisioning the use of computer viruses would have to plan for a reaction by the international antivirus industry, which is well positioned after years of development to provide an antidote to any emerging computer virus.

To be successful, computer viruses must be able to spread unnoticeably. Those that do have payloads that go off with a bang or cause poor performance on an infected system get noticed and immediately eliminated. Our virus-writing pros would have to spend a lot of time on intelligence, gaining intimate knowledge of the targeted systems and the ways in which they are used, so their viruses could be written to be maximally compatible. To get that kind of information, the team would need an insider or insiders. With insiders, computer viruses become irrelevant. They’re too much work for too little potential gain. In such a situation, it becomes far easier and far more final to have the inside agent use a hammer on the network server at an inopportune moment.

But what if, with all the caveats attached, computer viruses were still deployed as weapons in a future war? The answer might be, “So what?” Computer viruses are already blamed, wrongly, for many of the mysterious software conflicts, inexplicable system crashes, and losses of data and operability that make up the general background noise of modern personal computing. In such a world, if someone launched a few extra computer viruses into the mix, it’s quite likely that no one would notice.

Hackers as nuisances

What about the direct effects of system-hacking intruders? To examine this issue, it is worth examining in detail one series of intrusions by two young British men at the Air Force’s Rome Labs in Rome, New York, in 1994. This break-in became the centerpiece of a U.S. General Accounting Office (GAO) report on network intrusions at the Department of Defense (DOD) and was much discussed during congressional hearings on hacker break-ins the same year. The ramifications of the Rome break-ins are still being felt in 1998.

One of the men, Richard Pryce, was originally noticed on Rome computers on March 28, 1994, when personnel discovered a program called a “sniffer” he had placed on one of the Air Force systems to capture passwords and user log-ins to the network. A team of computer scientists was promptly sent to Rome to investigate and trace those responsible. They soon found that Pryce had a partner named Matthew Bevan.

Since the monitoring was of limited value in determining the whereabouts of Pryce and Bevan, investigators resorted to questioning informants they found on the Net. They sought hacker groupies, usually other young men wishing to be associated with those more skilled at hacking and even more eager to brag about their associations. Gossip from one of these Net stoolies revealed that Pryce was a 16-year-old hacker from Britain who ran a home-based bulletin board system; its telephone number was given to the Air Force. Air Force investigators subsequently contacted New Scotland Yard, which found out where Pryce lived.

By mid-April 1994, Air Force investigators had agreed that the intruders would be allowed to continue so their comings and goings could be used as a learning experience. On April 14, Bevan logged on to the Goddard Space Center in Greenbelt, Maryland, from a system in Latvia and copied data from it to the Baltic country. According to one Air Force report, the worst was assumed: Someone in an eastern European country was making a grab for sensitive information. The connection was broken. As it turned out, the Latvian computer was just another system that the British hackers were using as a stepping stone.

On May 12, not long after Pryce had penetrated a system in South Korea and copied material off a facility called the Korean Atomic Research Institute to an Air Force computer in Rome, British authorities finally arrested him. Pryce admitted to the Air Force break-ins as well as others. He was charged with 12 separate offenses under the British Computer Misuse Act. Eventually he pleaded guilty to minor charges in connection with the break-ins and was fined 1,200 English pounds. Bevan was arrested in 1996 after information on him was recovered from Pryce’s computer. In late 1997, he walked out of a south London Crown Court when English prosecutors conceded it wasn’t worth trying him on the basis of evidence submitted by the Air Force. He was deemed no threat to national computer security.

Pryce and Bevan had accomplished very little on their joyride through the Internet. Although they had made it into congressional hearings and been the object of much worried editorializing in the mainstream press, they had nothing to show for it except legal bills, some fines, and a reputation for shady behavior. Like the subculture of virus writers, they were little more than time-wasting petty nuisances.

But could a team of dedicated computer saboteurs accomplish more? Could such a team plant misinformation or contaminate a logistical database so that operations dependent on information supplied by the system would be adversely influenced? Maybe, maybe not. Again, as in the case of the writing of malicious software for a targeted computer system, a limiting factor not often discussed is knowledge about the system they are attacking. With little or no inside knowledge, the answer is no. The saboteurs would find themselves in the position of Pryce and Bevan, joyriding through a system they know little about.

Altering a database or issuing reports and commands that would withstand harsh scrutiny of an invaded system’s users without raising eyebrows requires intelligence that can only be supplied by an insider. An inside agent nullifies the need for a remote computer saboteur or information warrior. He can disrupt the system himself.

The implications of the Pryce/Bevan experience, however, were not lost on Air Force computer scientists. What was valuable about the Rome intrusions is that they forced those sent to stop the hackers into dealing with technical issues very quickly. As a result, Air Force Information Warfare Center computer scientists were able to develop a complete set of software tools to handle such intrusions. And although little of this was discussed in the media or in congressional meetings, the software and techniques developed gave the Air Force the capability of conducting real-time perimeter defense on its Internet sites should it choose to do so.

The computer scientists involved eventually left the military for the private sector and took their software, now dubbed NetRanger, with them. As a company called WheelGroup, bought earlier this year by Cisco Systems, they sell NetRanger and Net security services to DOD clients.

Inflated numbers

A less beneficial product of the incidents at Rome Labs was the circulation of a figure that has been used as an indicator of computer break-ins at DOD since 1996. The figure, furnished by the Defense Information Systems Agency (DISA) and published in the GAO report on the Rome Labs case, was 250,000 hacker intrusions into DOD computers in 1995. Taken at face value, this would seem to be a very alarming number, suggesting that Pentagon computers are under almost continuous assault by malefactors. As such, it has shown up literally hundreds of times since then in magazines, newspapers, and reports.

But the figure is not and has never been a real number. It is a guess, based on a much smaller number of recorded intrusions in 1995. And the smaller number is usually never mentioned when the alarming figure is cited. At a recent Pentagon press conference, DOD spokesman Kenneth H. Bacon acknowledged that the DISA figure was an estimate and that DISA received reports of about 500 actual incidents in 1995. Because DISA believed that only 0.2 percent of all intrusions are reported, it multiplied its figure by 500 and came up with 250,000.
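The arithmetic, made explicit in the short Python sketch below, also shows how sensitive the estimate is to the assumed reporting rate, which is itself a guess; the alternative rates shown are illustrative only.

    # The extrapolation made explicit: 500 reported incidents divided by an
    # assumed 0.2 percent reporting rate yields 250,000. The sweep below shows
    # how strongly the result depends on that assumed (and unverified) rate.
    reported_incidents = 500

    for assumed_rate in (0.002, 0.01, 0.05, 0.20):
        estimate = reported_incidents / assumed_rate
        print(f"assumed reporting rate {assumed_rate:.1%}: estimated intrusions {estimate:,.0f}")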

Kevin Ziese, the computer scientist who led the Rome Labs investigation, called the figure bogus in a January 1998 interview with Time Inc.’s Netly News. Ziese said that the original DISA figure was inflated by instances of legitimate user screwups and by unexplained but harmless probes sent to DOD computers using the Internet command known as “finger,” which some Net users employ to look up another user’s name and, occasionally, minor additional information such as a work address and telephone number. But since 1995, the figure has been continually misrepresented as a solid metric of intrusions on U.S. military networks and has been very successful in selling the point that the nation’s computers are vulnerable to attack.

In late February 1998, Deputy Secretary of Defense John Hamre made news when he announced that DOD appeared to be under a cyber attack. Although a great deal of publicity was generated by the announcement, when the dust cleared the intrusions were no more serious than the Rome Labs break-ins of 1994. Once again it was two teenagers, this time from northern California, who had been successful at a handful of nuisance penetrations. In the interval between the burst of media attention and the FBI’s investigation, the teens strutted and bragged for Anti-Online, an Internet-based hacker fanzine, exaggerating their abilities for journalists.

Not everyone was impressed. Ziese dismissed the hackers as “ankle-biters” in the Wall Street Journal. Another computer security analyst, quoted in the same article, called them the virtual equivalent of a “kid walking into the Pentagon cafeteria.”

Why, then, had there been such an uproar? Part of the explanation lies in DOD’s apparently short institutional memory. Attempts to interview Hamre or a DOD subordinate in June 1998 to discuss and contrast the differences between the Rome incidents in 1994 and the more recent intrusions were turned down. Why? Astonishingly, it was simply because no top DOD official currently dealing with the issue had been serving in the same position in 1994, according to a Pentagon spokesperson.

Info-war myths

Another example of the jump from alarming scenario to done deal was presented in the National Security Agency (NSA) exercise known as “Eligible Receiver.” One phase of the war game, which was designed to simulate vulnerability to electronic attack, posited that an Internet message claiming that the 911 system had failed had been mailed to as many people as possible. The NSA information warriors took for granted that everyone reading it would immediately panic and call 911, causing a nationwide overload and system crash. It’s a naïve assumption that ignores a number of rather obvious realities, each capable of derailing it. First, a true nationwide problem with the 911 system would be more likely to be reported on TV than on the Internet, which penetrates far fewer households. Second, many Internet users, already familiar with an assortment of Internet hoaxes and mean-spirited practical jokes, would not be fooled and would take their own steps to debunk it. Finally, a significant portion of the U.S. inner-city population reliant on 911 service is not hooked to the Internet and cannot be reached by e-mail spoofs. Nevertheless, “It can probably be done, this sort of an attack, by a handful of folks working together,” claimed one NSA representative in the Atlanta Constitution. As far as info-war scenarios went, it was bogus.

However, with regard to other specific methods employed in “Eligible Receiver,” the Pentagon has remained vague. In a speech in Aspen, Colorado, in late July 1998, the Pentagon’s Hamre said of “Eligible Receiver:” “A year ago, concerned for this, the department undertook the first systematic exercise to determine the nation’s vulnerability and the department’s vulnerability to cyber war. And it was startling, frankly. We got about 30, 35 folks who became the attackers, the red team . . . We didn’t really let them take down the power system in the country, but we made them prove that they knew how to do it.”

The time and effort spent dreaming up scary info-war scenarios would be better spent bolstering basic computer security.

The Pentagon has consistently refused to provide substantive proof, other than its say-so, that such a feat is possible, claiming that it must protect sensitive information. The Pentagon’s stance is in stark contrast to the wide-open discussions of computer security vulnerabilities that reign on the Internet. On the Net, even the most obscure flaws in computer operating system software are immediately thrust into the public domain, where they are debated, tested, almost instantly distributed from hacker Web sites, and exposed to sophisticated academic scrutiny. Until DOD becomes more open, claims such as those presented by “Eligible Receiver” must be treated with a high degree of skepticism.

In the same vein, computer viruses and software used by hackers are not weapons of mass destruction. It is overreaching for the Pentagon to put such things in the same category as nuclear weapons and nerve gas. They can’t reduce cities to cinders. Insisting on classifying them as weapons of mass destruction suggests that the countless American teenagers who offer viruses and hacker tools on the Web are terrorists on a par with Hezbollah, a ludicrous assumption.

Seeking objectivity

Another reason to be skeptical of the warnings about information warfare is that those who are most alarmed are often the people who will benefit from government spending to combat the threat. A primary author of a January 1997 Defense Science Board report on information warfare, which recommended an immediate $580-million investment in private sector R&D for hardware and software to implement computer security, was Duane Andrews, executive vice president of SAIC, a computer security vendor and supplier of information warfare consulting services.

Assessments of the threats to the nation’s computer security should not be furnished by the same firms and vendors who supply hardware, software, and consulting services to counter the “threat” to the government and the military. Instead, a truly independent group should be set up to provide such assessments and to evaluate the claims of computer security software and hardware vendors selling to the government and corporate America. The group must not be staffed by those who have financial ties to computer security firms. The staff must be compensated adequately so that it is not cherry-picked by the computer security industry. It must not be a secret group, and its assessments, evaluations, and war game results should not be classified.

The National Institute of Standards and Technology, a handful of military agencies, and some independent academic groups have taken steps in this direction, but these efforts are still not enough. The NSA also performs such an evaluative function, but its mandate for secrecy and classification too often means that its findings are inaccessible to those who need them or, even worse, useless because NSA staff are not free to discuss them in detail.

Bolstering computer security

The time and effort expended on dreaming up potentially catastrophic information warfare scenarios could be better spent implementing consistent and widespread policies and practices in basic computer security. Although computer security is the problem of everyone who works with computers, it is still practiced half-heartedly throughout much of the military, the government, and corporate America. If organizations don’t intend to be serious about security, they simply should not be hooking their computers to the Internet. DOD in particular would be better served if it stopped wasting time trying to develop offensive info-war capabilities and put more effort into basic computer security practices.

It is far from proven that the country is at the mercy of possibly devastating computerized attacks. On the other hand, even the small number of examples of malicious behavior examined here demonstrates that computer security in our increasingly technological world will be of primary concern well into the foreseeable future. These two statements are not mutually exclusive, and policymakers must be skeptical of the Chicken Littles, the unsupported claims that push products, and the hoaxes and electronic ghost stories of our time.

Fall 1998 Update

Missile defense

In “Star Wars Redux” (Issues, Winter 1994-95), I discussed U.S. plans to develop and deploy highly capable defenses against theater (or tactical) ballistic missiles with ranges up to 3,500 kilometers. I argued that large-scale deployment of theater missile defense (TMD) systems could eventually undermine the confidence that the United States and Russia have in the effectiveness of their strategic nuclear retaliatory forces. I also argued that in the mid-term, TMD deployments could interfere with negotiations to further reduce nuclear arsenals.

In September 1997, after four years of negotiations in Geneva, the United States and Russia established a “demarcation” line between TMD systems, which are not limited by the 1972 ABM Treaty, and national missile defense (NMD) systems, which are restricted by the treaty to 100 interceptors for each side. Although Russia sought explicit constraints on the capabilities of TMD systems, the two countries did not set any direct limitations on TMD interceptor performance [the limits are only on the range (3,500 kilometers) and speed (5 kilometers per second) of target vehicles] or impose any other restrictions on TMD development or deployment. The sides did agree, however, to ban space-based interceptor missiles and space-based components based on other physical principles (such as lasers) that are capable of substituting for interceptor missiles. The United States and Russia left to each side the responsibility for determining whether its own higher-velocity TMD systems (with interceptor speeds over three kilometers per second) actually comply with the ABM Treaty. As more sophisticated TMD components are developed, this approach has the potential to generate serious disagreements over critical TMD issues, including air-based laser weapons and space-based tracking and battle-management sensors.

As thorny as the TMD issue has been during the past four years, it apparently was only the prelude to a renewed, more fundamental debate in Congress over whether to deploy an NMD. The Republican-controlled Congress supports an NMD as well as unfettered TMD deployments. Meanwhile, the Clinton administration has found itself squeezed between protecting the ABM Treaty and preserving the nuclear arms reduction process with Moscow on the one hand and managing the constant pressure from a conservative Congress for a firm commitment to missile defenses on the other.

Moscow has made it abundantly clear that it considers the ABM Treaty to be the key to continuing strategic nuclear arms reductions, that it opposes any large-scale NMD deployment, and that it considers the question of TMD deployments far from settled. Congress, on the other hand, believes that the United States should make a commitment now to an NMD; renegotiate or, if necessary, scrap the ABM Treaty to permit a large-scale NMD deployment; and refuse in any way to restrict TMD performance, deployment, or architecture. The future of missile defense may reach a crucial milestone this fall when Congress takes up a bill, already introduced in the House, declaring that “it is the policy of the United States to deploy a national missile defense.”

The Clinton administration has tried to accommodate these conflicting pressures by adopting a so-called “3+3” policy for NMD. This policy calls for continued R&D on NMD until 2000, at which time, if the threat warrants, a deployment decision could be made with the expectation that an NMD system would begin operation three years later. If, however, the threat assessment in 2000 does not justify a deployment decision, then R&D would continue, along with the capability to deploy within three years after a decision is made.

On TMD, the administration adamantly maintains that it has not negotiated a “dumbing down” of U.S. capabilities. Nonetheless, sensing that Senate opposition to limits on TMD can be overcome only by arguing that some understanding on TMD testing is the price for Russian agreement to eliminate multiple-warhead intercontinental ballistic missiles and significantly reduce its strategic nuclear forces, the administration has linked its submission to Congress of the TMD agreements to Russian ratification of the START II Treaty. If, however, the Russian Duma fails to ratify the START II agreements later this fall after President Clinton’s September summit in Moscow, the entire nuclear arms reduction process could collapse under the pressure from Congress for extensive and costly TMD and NMD deployments.

Jack Mendelsohn


International Scientific Cooperation

In August 1991, we traveled to Mexico to meet with policymakers and scientists about the establishment of a United States-Mexico science foundation devoted to supporting joint research on problems of mutual interest. We encountered enthusiasm and vision at every level, including an informal commitment by the Minister of Finance to match any U.S. contribution up to $20 million. At about this time, our article “Fiscal Alchemy: Transforming Debt into Research” (Issues, Fall 1991) sought to highlight three issues: 1) the pressing need for scientific partnerships between the United States and industrializing nations, 2) the mechanism of bilateral or multilateral foundations for funding such partnerships, and 3) the device of debt swaps for allowing debtor nations with limited foreign currency reserves to act as full partners in joint research ventures. We returned from our visit to Mexico flush with optimism about moving forward on all three fronts.

Results, overall, have been disappointing. We had hoped that the debt-for-science concept would be adopted by philanthropic organizations and universities as a way to leverage the most bang for the research buck. This has not taken place. The complexity of negotiating debt swaps and the changing dynamics of the international economy may be inhibiting factors. But much more significant, in our view, is a general unwillingness in this nation to pursue substantive international scientific cooperation with industrializing and developing nations.

Although the National Science Foundation and other agencies do fund U.S. scientists conducting research in the industrializing and developing world, this work does not support broader partnerships aimed at shared goals. Such partnerships can foster the local technological capacities that underlie economic growth and environmental stewardship; we also view them as key to successfully addressing a range of mutual problems, including transborder pollution, emerging diseases, and global climate change. Yet there is a conspicuous lack of attention to this approach at all levels of the administration; most important, the State Department continues to view scientific cooperation as a question of nothing more than diplomatic process.

Incredibly, through 1995 (the latest year for which data are available), the United States had negotiated more than 800 bilateral and multilateral science and technology agreements (up from 668 in 1991), even though virtually none of these are backed by funding commitments. Nor is there any coordination among agencies regarding goals, implementation, redundancy, or follow-up. A report by the RAND Corporation, “International Cooperation in Research and Development,” found little correlation between international agreements and actual research projects. Moreover, although there are few indications that these agreements have led to significant scientific partnerships with industrializing and developing nations, there is plenty of evidence that they support a healthy bureaucratic infrastructure, including, for example, international science and technology offices at the Office of Science and Technology Policy, Department of State, Department of Commerce, and all the technological agencies. We cannot help but think that a portion of the funds devoted to negotiating new agreements and maintaining existing ones might be better spent on cooperative science.

One bright spot in this picture has been the United States-Mexico Foundation for Science, which is off to a promising start despite restricted financial resources. Although Congress approved an appropriation of up to $20 million in 1991, to date the administration has been willing to contribute only $3.8 million to the foundation. Mexico has matched this amount and remains willing to match significantly higher U.S. contributions, which we hope will be forthcoming in the next year. Some additional funds have come from philanthropic organizations. At this early stage, the foundation is focusing especially on issues of water and health in the U.S.-Mexico border region, as well as joint technological workshops and graduate student fellowships. (For more information, see the foundation’s Web site at www.fumec.org.mx.) We remain convinced that the foundation is an important prototype for scientific partnership in an increasingly interconnected and interdependent community of nations.

George E. Brown, Jr.

Daniel Sarewitz

Michael Quear

Environmental Policy in the Age of Genetics

In April 1965, a young researcher at Fairchild Semiconductor named Gordon Moore published an article entitled “Cramming More Components Onto Integrated Circuits” in an obscure industry magazine. He predicted that the power of the silicon chip would double almost annually, with a proportionate decrease in cost. Moore went on to become one of the founders of Intel, and his prediction, now known as Moore’s Law, has become an accepted industry truism. Recently, Monsanto proposed a similar law for biotechnology, which states that the amount of genetic information used in practical applications will double every year or two.

Sitting at the intersection of these two laws is a fascinating device known as the gene or DNA chip, a fusion of biology and semiconductor manufacturing technology. Like their microprocessor cousins, gene chips contain a dense grid or array placed on silicon using techniques such as photolithography. In the case of gene chips, however, this array holds DNA probes that form one half of the DNA double helix and can recognize and bind DNA from samples taken from the people or organisms being tested. After binding, a laser activates fluorescent dyes attached to the DNA, and the patterns of fluorescence are analyzed to reveal mutations of interest or gene activity. All indications are that gene chips are obeying Moore’s Law. Three years ago, the first gene chips held 20,000 DNA probes; last year, chips held 65,000; and chips with more than 400,000 probes have recently been introduced. The chips are attracting intense commercial interest. In June 1998, Motorola, Packard Instrument, and the U.S. government’s Argonne National Laboratory signed a multiyear agreement to develop the technologies required to mass-produce gene chips.
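The detection principle can be caricatured in a few lines of code. The sketch below is a toy illustration of probe/target complementarity only; it is not a model of any real chip’s chemistry or of the image-analysis software that vendors actually use, and the probe sequences are invented.

```python
# Toy illustration of the probe/target principle: each probe is one strand of
# the double helix, and a sample fragment "binds" when it contains the
# probe's reverse complement. Sequences here are invented for illustration.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def bound_probes(probes: dict, sample: str) -> list:
    """Names of probes whose complementary sequence appears in the sample."""
    return [name for name, probe in probes.items()
            if reverse_complement(probe) in sample]

probes = {"wild-type": "ATGGTGCACCTGACTCC",   # hypothetical probe sequences
          "variant":   "ATGGTGCATCTGACTCC"}   # differs by a single base
sample = reverse_complement(probes["variant"])  # fragment carrying the variant
print(bound_probes(probes, sample))             # ['variant']
```

A real chip performs this comparison in parallel for hundreds of thousands of probes at once, which is where the Moore’s Law comparison comes from.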

We are quickly wandering into an area with few legal protections and even fewer legal precedents in case law.

So what is new about this technology? Experimental chips are already at least 25 times faster than existing gene sequencing methods at decoding information. The chip decodes genetic information a paragraph or page at a time, rather than letter by letter, sequencing an entire genome in minutes and locating missing pieces or structural changes. If we can read a person’s genetic story that fast, we can finish the book in a reasonable amount of time and understand more complex plots and subplots. Existing techniques have been valuable in identifying a small number of changes in the DNA chain commonly known as single nucleotide polymorphisms, which may result in diseases such as sickle cell anemia. However, these approaches have proved too slow and expensive to provide information on polygenic diseases, in which many genes may contribute to the emergence of disease or increased susceptibility to stressors. The gene chips are a key in recognizing this multigene “fingerprint,” which may underlie diseases with complex etiologies involving the interaction of multiple genes as well as environmental factors.

Much environmental regulation protects human health by a very indirect route. For example, a very high dose of a chemical might be found to cause cancer in rats or other laboratory animals. Even though the mechanism by which the cancer is formed may be poorly understood, an estimate is made that a certain amount of that chemical would be harmful to humans. Estimates are then made about what concentration of that chemical in the environment might result in a high level in humans and what level of discharge of that chemical from an industrial plant or other source might result in the dangerously high concentration in the environment. Finally, the facility is told that it must limit its release of that chemical to a specific level, and, in many cases, the technologies to accomplish these reductions are prescribed. This long series of assumptions, calculations, and extrapolations makes the regulatory process slow, inexact, and contentious-a breeding ground for litigation, scientific disputes, and public confusion.
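To see why the chain is so contentious, it helps to lay out the arithmetic of a typical extrapolation. The sketch below is schematic only; every number in it is invented for illustration, and real risk assessments rely on far more elaborate dose-response and exposure models.

```python
# Schematic sketch of the chain of extrapolations described above.
# All numbers are invented for illustration.
animal_no_effect_dose = 50.0   # mg per kg body weight per day, from an animal study
interspecies_factor = 10.0     # safety factor for extrapolating animals -> humans
variability_factor = 10.0      # safety factor for average -> sensitive individuals

acceptable_human_dose = animal_no_effect_dose / (interspecies_factor * variability_factor)

body_weight = 70.0             # kg, assumed adult
water_intake = 2.0             # liters of drinking water per day, assumed
allowable_concentration = acceptable_human_dose * body_weight / water_intake  # mg per liter

print(f"Acceptable dose: {acceptable_human_dose:.2f} mg/kg/day")
print(f"Allowable drinking-water concentration: {allowable_concentration:.1f} mg/L")
```

Each line embeds an assumption that can be, and routinely is, disputed, which is why the resulting limits invite litigation rather than settle it.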

Gene chip technology could turn that system on its head. Biomarkers (substances produced by the body in response to chemicals) have already made it possible to measure the level of a specific chemical such as lead, benzene, or vinyl chloride in an individual’s urine, blood, or tissue. Gene chips will make it possible to observe the actual loss of genetic function and predict susceptibility to change induced by a chemical. As the cost of the technology decreases, it will be possible to do this for many, many more people; ultimately, it might be cost-effective to screen large populations. The focus of environmental management will shift from monitoring the external environment to looking at how external exposures translate into diseases at a molecular level. This could radically change the way we approach environmental risk assessment and management, especially if diagnostic information from the gene chips is used in combination with emerging techniques in the field of molecular medicine. This could open up whole new avenues for prevention and early intervention and allow us to custom-design individual strategies to reduce or avoid a person’s exposure to environmental threats at a molecular level. Some simple intervention measures already exist. For instance, potassium iodide can block the thyroid’s uptake of the radioactive iodine that causes thyroid cancer, and the Nuclear Regulatory Commission has recently approved its distribution to residents living in close proximity to nuclear power plants. However, attaining the Holy Grail of the human genome moves the intervention possibilities to a very different level. New techniques are now being developed that block the ability of environmental toxins to bind to proteins and cause damage, speed up the rate at which naturally occurring enzymes detoxify substances, or enhance the ability of the human body to actually repair environmentally damaged DNA. We move from the end-of-the-pipe world of the 1970s to the inside-the-gene world of the next millennium.

Potential misuse

This potential comes packaged with significant dangers. Francis Collins, director of the Human Genome Project at the National Institutes of Health, recently remarked that the ability to identify individual susceptibility to illness “will empower people to take advantage of preventive strategies, but it could also be a nightmare of discriminatory information that could be used against people.”

Without the proper safeguards in place, possibilities will abound for coercive monitoring, job discrimination, and violations of privacy. From a policy perspective, the danger exists that we could either overreact to these potential problems or react too late. Some of the more obvious issues are being addressed by a part of the Human Genome Project that looks at ethical, legal, and social implications of our expanding knowledge of genetics. However, the privacy and civil liberties debate has tended to mask more subtle, but potentially profound, effects on fields other than medicine. The use of gene chips could forever alter the rules of the game that have dominated environmental protection for 25 years. Here are a number of speculative concerns for those responsible for environmental policy.

First, as such testing and intervention capacity becomes cheaper, more accessible, and more widespread, it puts more power in the hands of the public and the medical profession and takes it away from the high priesthood of toxicologists and risk assessors in our regulatory institutions. This is not necessarily bad, because polls have shown that the public has a greater trust in the medical profession than in the environmental regulatory community. However, it is not at all clear that the medical community wants, or is trained, to take on this role. Research done by Neil Holtzman at the Johns Hopkins School of Medicine has shown that many physicians have a poor understanding of the probabilistic data generated by genetic testing, and other studies have indicated that many physicians are uncomfortable about sharing such information with patients. The few genetic tests already available for diseases such as cystic fibrosis have taxed our capability to provide the counseling needed to deal with patient fears and the new dilemmas of choice. Added to this picture is the potential involvement of the managed care and insurance industries in defining the testing, treatments, costs, and ultimate outcomes. Genetic information could be used by insurance companies to deny coverage to healthy people who have been identified as being susceptible to environmentally related diseases. Knowledge is power, and if the gene chips provide that knowledge to a new set of actors, environmental decisionmaking could be radically altered in ways that provide immense opportunity but that could also result in institutional paralysis, mass confusion, and public distrust.

Second, in a world where environmental policy is increasingly driven and shaped by constituencies, the new technologies offer a stepping stone toward the “individualization” of environmental protection and are a potential time bomb in our litigious culture. The rise of toxic tort litigation over the past 25 years has closely paralleled our scientific ability to show proximate causation; that is, to connect a specific act with a specific effect. Until now, environmental litigation has fallen largely into two classes: class action suits filed by large numbers of individuals exposed to proven carcinogens such as asbestos, or suits brought by people in cases where exposures to environmental agents have led to identifiable clusters of diseases such as leukemia. The possibility that individuals could acquire enough genetic evidence to support lawsuits for environmental exposures raises some truly frightening prospects. Though workers’ compensation laws generally bar lawsuits for damages resulting from injuries or illnesses in the workplace, loopholes exist, especially if employers learn of exposures and/or susceptibilities through genetic testing and do not notify workers. The expanded use of gene chips for medical surveillance in the workplace increases the possibilities for discrimination across the board. Finally, the testing of large populations with this technology may increase the likelihood of legal disputes based on emerging evidence of gender-, ethnicity-, or race-based variances in susceptibility to environmentally linked diseases. We are quickly wandering into an area with few legal protections and even fewer legal precedents in case law.

Third, the increased knowledge of human genetic variation and vulnerability will likely increase what Edward Tenner of Princeton University has described as the “burden of vigilance”-a need to continuously monitor at-risk individuals and environmental threats at levels far exceeding the capacities of our existing data-gathering systems. This could result in a demand for microlevel monitors for household or personal use, better labeling of products, and far greater scrutiny of the more than 2,000 chemicals that are registered annually by the Environmental Protection Agency (EPA) and used in commerce (we now have adequate human toxicity data on less than 40 percent of these). Much of this new data will not provide unequivocal answers but will require the development of new interpretive expertise and mechanisms to deal with problems such as false positives, which could lead to inaccurate diagnoses and intervention errors.

Finally, though the costs of the chips can be expected to drop, there may be a period when they are available only to the wealthy. That period could be much longer if the health care system refuses to underwrite their use, making early detection and the associated intervention options unavailable to the uninsured and low-income portions of the population, who might have high exposures to environmental toxins. The same situation would be found in less developed countries with dirty industries and weak environmental laws, where populations may have few options to monitor exposure and ultimately escape disease. Who will decide who benefits and who does not?

Keeping pace

This is clearly a situation where rapid scientific and technological advance could outrun our institutional capabilities and test our moral fabric. As we all know, social innovation and moral development do not obey Moore’s Law. The most important question is not whether such technologies will be developed and applied (they will) but whether we will be ready as a society to deal with the associated ethical, institutional, and legal implications. Steve Fodor of Affymetrix, one of the leading manufacturers of gene chips, recently remarked that, “Ninety-nine percent of the people don’t have an inkling about how fast this revolution is coming.” Although there has been a recent flurry of attempts by a wide variety of think tanks and policy analysts to “reinvent” the regulatory system, there is no indication that the environmental policy community is paying attention to this development.

This brings us to the final and most important lesson of the gene chip. It was only 35 years ago that Herman Kahn and his colleagues at the RAND Corporation confronted the policymaking community with the possibilities and probable outcomes of another of our large scientific and technological enterprises: The Manhattan Project. By outlining the potential outcomes of a war fought with thermonuclear weapons, they taught us two important things. First, science, and especially big science like the Human Genome Project, has far-reaching effects that are often unintended, unanticipated, and unaddressed by the people directly involved in the scientific enterprise. Second, and probably more important, is that better foresight is possible and can lead to better public policies and decisionmaking. Though the pace of technological change has accelerated, we have forgotten Kahn’s lessons. The elimination of the Office of Technology Assessment in 1996 helped ensure that we will continue to drive through the rapidly changing technological landscape with the headlights off. In times like these, we need more foresight, not less. Embedded in the intriguing question of how the gene chip might affect environmental policy is the larger question of who will ultimately protect us from ourselves, our creations, and ultimately, our hubris. We are placing ourselves in a position described so well over 100 years ago by Ralph Waldo Emerson when he wrote that, “We learn about geology the day after the earthquake.”

Toward a Global Science

In the early 1990s, the Carnegie Commission on Science, Technology, and Government published a series of reports emphasizing the need for a greatly increased role for science and scientists in international affairs. In a world full of conflicting cultural values and competing needs, scientists everywhere share a powerful common culture that respects honesty, generosity, and ideas independently of their source, while rewarding merit. A major aim of the National Academy of Sciences (NAS) is to strengthen the ties between scientists and their institutions around the world. Our goal is to create a scientific network that becomes a central element in the interactions between nations, increasing the level of rationality in international discourse while enhancing the influence of scientists everywhere in the decisionmaking processes of their own governments.

I am pleased to announce that we recently received a letter from the Department of State in which Secretary Madeleine Albright requests that we help the State Department determine “the contributions that science, technology, and health can make to foreign policy, and how the department might better carry out its responsibilities to that end.” I want to begin that effort by suggesting four principles that should guide our activities.

Science can be a powerful force for promoting democracy. The vitality of a nation’s science and technology enterprise is increasingly becoming the main driver of economic advancement around the world. Success requires a free exchange of ideas as well as universal access to the world’s great store of knowledge. Historically, the growth of science has helped to spread democracy, and this is even more true today. Many governments around the world exert power over their citizens through the control of information. But restricting access to knowledge has proven to be self-destructive to the economic vitality of nations in the modern world. The reason is a simple one: The world is too complex for a few leaders to make wise decisions about all aspects of public policy.

New scientific and technological advances are essential to accommodate the world’s rapidly expanding population. The rapid rise in the human population in the second half of this century has led to a crowded world, one that will require all of the ingenuity available from science and technology to maintain stability in the face of increasing demands on natural resources. Thus, for example, a potential disaster is looming in Africa. Traditionally, farmers had enough land available to practice shifting cultivation, in which fields were left fallow for 10 or so years between cycles of plantings. But now, because of Africa’s dramatically increasing population, there is not enough land to allow these practices. The result is a continuing process of soil degradation that reduces yields and will make it nearly impossible for Africa to feed itself. The best estimates for the year 2010 predict that fully one-third of the people in Sub-Saharan Africa will have great difficulty obtaining food.

It has been argued that the ethnic conflicts that led to the massacres in Rwanda were in large part triggered by conflicts over limited food resources. We can expect more such conflicts in the future, unless something dramatic is done now. How might the tremendous scientific resources of the developed world be brought to bear on increasing the African food supply? At present, I see large numbers of talented, idealistic young people in our universities who would welcome the challenge of working on such urgent scientific problems. But the many opportunities to use modern science on behalf of the developing world remain invisible to most scientists on our university campuses. As a result, a great potential resource for improving the human condition is being ignored.

Electronic communication networks make possible a new kind of world science. In looking to the future, it is important to recognize that we are only at the very beginning of the communications revolution. For example, we are promised by several commercial partnerships that by the year 2002 good connectivity to the World Wide Web will become available everywhere in the world at a modest cost through satellite communications. Moreover, at least some of these partnerships have promised to provide heavily subsidized connections for the developing world.

Developing countries have traditionally had very poor access to the world’s store of scientific knowledge. With the electronic publication of scientific journals, we now have the potential to eliminate this lack of access. NAS has decided to lead the way with our flagship journal, the Proceedings of the National Academy of Sciences, making it free on the Web for developing nations. We also are hoping to spread this practice widely among other scientific and technical journals, since there is almost no cost involved in providing such free electronic access.

The next problem that scientists in developing countries will face is that of finding the information they need in the mass of published literature. In 1997, the U.S. government set an important precedent. It announced that the National Library of Medicine’s indexing of the complete biomedical literature would be made electronically available for free around the world through a Web site called PubMed. The director of the PubMed effort, David Lipman, is presently investigating what can be done to produce a similar site for agricultural and environmental literature.

The communications revolution also is driving a great transformation in education. Already, the Web is being used as a direct teaching tool, providing virtual classrooms of interacting students and faculty. This tool allows a course taught at one site to be taken by students anywhere in the world. Such technologies present an enormous opportunity to spread the ability to use scientific and technical knowledge everywhere, an ability that will be absolutely essential if we are to head for a more rational and sustainable world in the 21st century.

Science academies can be a strong force for wise policymaking. In preparing for the future, we need to remember that we are only a tiny part of the world’s people. In 1998, seven out of every eight children born will be growing up in a developing nation. As the Carnegie Commission emphasized, we need more effective mechanisms for providing scientific advice internationally, particularly in view of the overwhelming needs of this huge population.

In 1993, the scientific academies of the world met for the first time in New Delhi; the purpose was to address world population issues. The report developed by this group of 60 academies was presented a year later at the 1994 UN Conference at Cairo. Its success has now led to a more formal collaboration among academies, known as the InterAcademy Panel (IAP). A common Web site for the entire group will soon be online, and the IAP is working toward a major conference in Tokyo in May of 2000 that will focus on the challenges for science and technology in making the transition to a more sustainable world.

Inspired by a successful joint study with the Mexican academy that produced a report on Mexico City’s water supply, we began a study in 1996 entitled “Sustaining Freshwater Resources in the Middle East” as a collaboration among NAS, the Royal Scientific Society of Jordan, the Israel Academy of Sciences and Humanities, and the Palestine Health Council. The final version of this report is now in review, and we expect it to be released this summer. I would also like to highlight a new energy study that we initiated this year with China. Here, four academies-two from the United States and two from China-are collaborating to produce a major forward-looking study of the energy options for our two countries. Recently, the Indian Science and Engineering Academies have indicated an interest in carrying out a similar energy study with us. I believe that these Indian and Chinese collaborations are likely to lead us all toward a wiser use of global energy resources.

My dream for the IAP is to have it become recognized as a major provider of international advice for developing nations, the World Bank, and the many similar agencies that require expert scientific and technical assistance. Through an IAP mechanism, any country or organization seeking advice could immediately call on a small group of academies of its choosing to provide it with politically balanced input coupled with the appropriate scientific and technical expertise.

The road from here

In the coming year, NAS will attempt to prepare an international science road map to help our State Department. My discussions with the leaders of academies in developing countries convince me that they will need to develop their own road maps in the form of national science policies. To quote José Goldemberg, a distinguished scientific leader from Brazil: “What my scientist colleagues and national leaders alike failed to understand was that development does not necessarily coincide with the possession of nuclear weapons or the capability to launch satellites. Rather, it requires modern agriculture, industrial systems, and education . . . This scenario means that we in developing countries should not expect to follow the research model that led to the scientific enterprise of the United States and elsewhere. Rather, we need to adapt and develop technologies appropriate to our local circumstances, help strengthen education, and expand our roles as advisers in both government and industry.”

In his work for the Carnegie Commission, Jimmy Carter made the following observations about global development: “Hundreds of well-intentioned international aid agencies, with their own priorities and idiosyncrasies, seldom cooperate or even communicate with each other. Instead, they compete for publicity, funding, and access to potential recipients. Overburdened leaders in developing countries, whose governments are often relatively disorganized, confront a cacophony of offers and demands from donors.”

My contacts with international development projects in agriculture have made me aware that many experiments are carried out to try to improve productivity. A few are very successful, but many turn out to be failures. The natural inclination is to hide all of the failures. But as every experimental scientist knows, progress is made by learning from what did not work and then incorporating that knowledge into a general framework for moving forward. I would hope that we, as scientists, could lead the world toward more rational approaches to improving international development efforts.

The U.S. economy is booming. But as I look around our plush shopping malls, observing the rush of our citizens to consume more and more, I wonder whether this is really progress. In thinking about how our nation can prove itself as the world leader it purports to be, we might do well to consider the words of Franklin Roosevelt: “The test of our progress is not whether we add more to the abundance of those who have much; it is whether we provide enough for those who have little.” As many others have pointed out, every year the inequities of wealth are becoming greater within our nation and around the world. The spread of scientific and technological information throughout the world, involving a generous sharing of knowledge resources by our nation’s scientists and engineers, can improve the lives of those who are most in need around the globe. This is a challenge for science and for NAS.

Something Old, Something New

First, I want to welcome back the National Academy of Engineering as a sponsor of Issues. NAE was an original sponsor and supported the magazine for more than a decade. During a period of transition in its leadership, it suspended its sponsorship, but now that it has regained its equilibrium under the leadership of Wm. A. Wulf, it has renewed its commitment to Issues. Even when NAE was not an active sponsor, Issues addressed the subjects of technology and industry that are of interest to NAE members. With NAE back as an active participant, we should be able to strengthen our coverage of these topics. This issue’s cover stories on the relationship between information technology and economic productivity should be of particular interest to NAE members.

Second, I want to announce an initiative to enhance our online presence. For several years, we have been posting on our website the table of contents and several articles from each issue. Beginning with the fall issue, we will make the entire contents of each issue available online, and we will create a searchable database of back issues. This database will be integrated with the much larger database of publications from the National Academy Press. A search for material about Superfund, for example, will turn up references to National Research Council reports as well as to Issues articles. Our hope is that this resource will be indispensable to public policy researchers.

In addition, we want to transform the Forum section into an active online debate. Forum letters will be posted as soon as they arrive, authors will be encouraged to respond to the letters, and everyone will be invited to participate by commenting on the letters or the original articles. We believe that this feature will be particularly valuable for a quarterly publication. It will mean that it’s not necessary to wait three months to hear responses to articles, that real-time policy debate will be possible, and that there need be no space limitations to constrain comment. Forum has always been one of the most popular sections of the magazine, and this can only enhance its value.

Electronic economics

Access to the Issues website will be free. In this, we are following the example of the National Academy Press, which in 1997 put its entire backlist of publications online with free access. Although some worried that this would reduce sales, the opposite occurred. People became interested in what they found online and opted to buy the printed version. Book sales have increased.

It appears that the Internet is a good way to find information but that print is still the preferred way to use it. Reading on the screen is difficult, printing web pages is slow, and bound books and magazines are convenient to hold and store. The time may come when electronic text rivals the printed word for convenience, but it’s not here yet. We expect that web visitors who find Issues useful will want to subscribe to the print version. And for those who can’t afford it or who use it rarely, we will be providing a public service.

The goal of online publishing is not simply to produce the electronic equivalent of the print edition. The true value of the World Wide Web is its unlimited linking ability. An online version can provide much more than a print edition. When an author cites a specific report, a click on the mouse can call that report to the screen. A data reference can be linked to the full set of data from which the reference was drawn. Recommended reading becomes a list of quick links to the full text of the publications. Combined with the capacity for instantaneous online debate, linking makes online publishing much livelier and more interactive.

Finally, we would like to be able to alert you when new material appears on our website. It’s frustrating to pay repeated visits to a site and find nothing new. That’s not likely with the NAS and NAP websites, which are updated regularly, but we would like to make it easier for you to decide when you want to surf in. We plan to develop an electronic mailing list to which we would send alerts announcing the presence of new material on the website. In this way you will know when something of interest is posted without having to take the time to visit the site. If you want to be placed on this list, please send your e-mail address to [email protected]. Eventually, we want to code this list with your interests so that you receive an alert only when it refers to topics that you specify. Although Issues will not be available online for a few months, you can already find an abundance of valuable information at the National Academy of Sciences (www.nas.edu) and the National Academy Press (www.nap.edu).

To help us understand how our readers use the Internet (and to update information that is useful to the editorial and business decisions at Issues), we have incorporated into this issue a brief reader survey. We would be very grateful if you would take the time to complete the survey and return it to us by fax or mail. Once we have established an active online presence, surveys such as this will be less necessary. But for now, it’s the best way for us to stay in touch with you. Please respond.

Shaping a Smarter Environmental Policy for Farming

In the summer of 1997, Maryland Governor Parris Glendening suddenly closed two major rivers to fishing and swimming, after reports of people becoming ill from contact with the water. Tests uncovered outbreaks of a toxic microbe, Pfiesteria piscicida, perhaps caused by runoff of chicken manure that had been spread as fertilizer on farmers’ fields. Glendening’s action riveted national attention on a long-overlooked problem: the pollution of fresh water by agricultural operations. When the governor then proposed a ban on spreading chicken manure, the state’s poultry producers lashed back, claiming they would go out of business if they had to pay to dispose of the waste.

The controversy, and others springing up in Virginia, Missouri, California, and elsewhere, has galvanized debate among farmers, ranchers, environmentalists, and regulators over how to control agricultural pollution. The days of relying on voluntary controls and payments to farmers for cutbacks are rapidly ending. A final policy is far from settled, but even defenders of agriculture have endorsed more aggressive approaches than were considered feasible before recent pollution outbreaks.

Agricultural runoff is the primary cause of the degradation of groundwater and surface waters.

Maryland’s proposed ban is part of a state-led shift toward directly controlling agricultural pollution. Thirty states have at least one law with enforceable measures to reduce contamination of fresh water; most of these laws have been enacted in the 1990s. Federal policy has lagged behind, but President Clinton’s Clean Water Action Plan, introduced in early 1998, may signal a turn toward more direct controls as well. After decades of little effort, state and federal lawmakers seem ready to attack the problem. But there is a serious question as to whether they are going about it in the best way.

The quality of U.S. rivers, lakes, and groundwater has improved dramatically since the 1972 Clean Water Act, which set in motion a series of controls on effluents from industry and in urban areas. Today, states report that the condition of two-thirds of surface water and three-fourths of groundwater is good. But where there is still degradation, agriculture is cited as the primary cause. Public health scares have prompted legislators to take action on the runoff of manure, fertilizer, pesticides, and sediment from farmland.

Although it is high time to deal with agriculture’s contribution to water pollution, the damage is very uneven in scope and severity; it tends to occur where farming is extensive and fresh water resources are vulnerable. Thus, blanket regulations would be unwise. There is also enormous inertia to overcome. For decades, the federal approach to controlling agriculture has been to pay farmers not to engage in certain activities, and agricultural interest groups have resisted any reforms that don’t also pay.

Perhaps the most vexing complication is that scientists cannot conclusively say whether specific production practices, such as how manure and fertilizer are spread and how land is terraced and tilled, will help, because the complex relationship between what runs off a given parcel of land and how it affects water quality is not well understood. Prescribing best practices amounts to guesswork in most situations, yet that is what current proposals do. Unless a clear scientific basis can be shown, the political and monetary cost of mandating and enforcing specific practices will be great. Farmers will suffer from flawed policies, and battle lines will be drawn. Meanwhile, the slow scientific progress in unraveling the link between farm practices and water pollution will continue to hamper innovation that could solve problems in cost-effective ways.

Better policies from the U.S. Department of Agriculture (USDA), the Environmental Protection Agency (EPA), and state agricultural and environmental departments are certainly needed. But which policies? Because the science to prove their effectiveness does not exist, mandating the use of certain practices is problematic. Paying farmers for pollution control is a plain subsidy, a tactic used for no other U.S. industry. A smarter, incentive-based approach is needed. Happily, such an approach does exist, and its lessons can be applied to minimizing agriculture’s adverse effects on biodiversity and air pollution as well.

Persistent pollution

Farms and ranches cover about half of the nation’s land base. Recent assessments of agriculture’s effects on the environment by the National Research Council (NRC), USDA, and other organizations indicate that serious environmental problems exist in many regions, although their scope and severity vary widely. Significant improvements have been made during the past decade in controlling soil erosion and restoring certain wildlife populations, but serious problems, most notably water pollution, persist with no prospect of enduring remedies.

The biggest contribution to surface water and groundwater problems is polluted runoff, which stems from soil erosion, the use of pesticides, and the spreading of animal wastes and fertilizers, particularly nitrogen and phosphorus. Annual damages caused by sediment runoff alone are estimated at between $2 billion and $8 billion. Excessive sediment is a deceptively big problem: As it fills river beds, it promotes floods and burdens the plants that process municipal drinking water. It also clouds rivers, decreasing sunlight, which in turn lowers oxygen levels and chokes off life in the water.

National data on groundwater quality have been scarce because of the difficulty and cost of monitoring. EPA studies in the late 1980s showed that fewer than 1 percent of community water systems and rural wells exceeded EPA’s maximum contaminant levels for pesticides. Fewer than 3 percent of wells topped EPA’s limit for nitrates. However, the percentages still translate into a large number of unsafe drinking water sources, and only a fraction of state groundwater has been tested. The state inventory data on surface water quality are limited too, covering only 17 percent of the country’s rivers and 42 percent of its lakes. A nationally consistent and comprehensive assessment of the nation’s water quality does not exist and is not feasible with the state inventory system. We therefore cannot say anything definitive about agriculture’s overall role in pollution.

Nonetheless, we know a good deal about water conditions in specific localities, enough to improve pollution policy. Important progress is being made by the U.S. Geological Survey (USGS), which began a National Water Quality Assessment (NAWQA) in the 1980s precisely because we could not construct an accurate national picture. USGS scientists estimated in 1994 that 71 percent of U.S. cropland lies in watersheds where at least one agricultural pollutant violates criteria for recreational or ecological health. The Corn Belt is a prime example. Hundreds of thousands of tons of nutrients-nitrogen and phosphorus from fertilizers and animal wastes-are carried by runoff from as far north as Minnesota to Louisiana’s Gulf Coast estuaries. The nutrients cause excessive algae growth, which draws down oxygen levels so low that shellfish and other aquatic organisms die. (This process has helped to create a “dead” zone in the Gulf of Mexico– a several-hundred-square-mile area that is virtually devoid of life.) Investigators have traced 70 percent of the fugitive nutrients that flow into the Gulf to areas above the confluence of the Ohio and Mississippi Rivers. In a separate NAWQA analysis, most nutrients in streams-92 percent of nitrogen and 76 percent of phosphorus-were estimated to flow from nonpoint or diffuse sources, primarily agriculture. USGS scientists also estimated that more than half the phosphorus in rivers in eight Midwestern states, more than half the nitrate in seven states, and more than half the concentrations of atrazine, a common agricultural pesticide, in 16 states all come from sources in other states. Hence those states cannot control the quality of their streams and rivers by acting alone.

Groundwater pollution is another problem. Groundwater supplies half the U.S. population with drinking water and is the sole source for most rural communities. Today, the most serious contamination appears to be high levels of nitrates from fertilizers and animal waste. USGS scientists have found that 12 percent of domestic wells in agricultural areas exceed the maximum contaminant level for nitrate, which is more than twice the rate for wells in nonagricultural areas and six times that for public wells. Also, samples from 48 agricultural areas turned up pesticides in 59 percent of shallow wells. Although most concentrations were substantially below EPA water standards, multiple pesticides were commonly detected. This pesticide soup was even more pronounced in streams. No standards exist for such mixtures.

These results are worrisome enough, and outbreaks of illness such as the Pfiesteria scourge have heightened awareness. But what has really focused national attention on agriculture’s pollution of waterways has been large spills of animal waste from retention ponds. According to a study done by staff for Sen. Tom Harkin (D-Iowa), Iowa, Minnesota, and Missouri had 40 large manure spills in 1996. When a dike around a large lagoon in North Carolina failed, an estimated 25 million gallons of hog manure (about twice the volume of oil spilled by the Exxon Valdez accident) was released into nearby fields and waterways. Virtually all aquatic life was killed along a 17-mile stretch of the New River. North Carolina subsequently approved legislation that requires acceptable animal waste management plans. EPA indicates that as many as two-thirds of confined-animal operations across the nation lack permits governing their pollution discharges. Not surprisingly, a major thrust of the new Clean Water Action Plan is to bring about more uniform compliance for large animal operations.

Dubious tactics

Historically, environmental programs for agriculture have used one of three approaches, all of which have questionable long-term benefits. Since the Great Depression, when poor farming practices and drought led to huge dust storms that blackened midwestern skies, the predominant model for reducing agriculture’s harm to the environment has been to encourage farmers to change practices voluntarily. Today, employees of state agencies, extension services, and federal conservation agencies visit farmers, explain how certain practices are harming the land or waterways, and suggest new techniques and technologies. The farmers are also told that if they change damaging practices or adopt new program X or technology Y, they can get payments from the state or federal government.


Long-term studies indicate that these voluntary payment schemes have been effective in spurring significant change; however, as soon as the payments stop, use of the practices dwindles. The Conservation Reserve Program (CRP) now sets aside about 30 million acres of environmentally vulnerable land. Under CRP, farmers agree to retire eligible lands for 10 years in exchange for annual payments, plus cost sharing to establish land cover such as grasses or trees. About 10 percent of the U.S. cropland base has been protected in this way, at a cost of about $2 billion a year.

Although certain parcels of this land should be retired from intensive cultivation because they are too fragile to be farmed, we may be overdoing it with CRP. Some of this land will be needed to produce more food as U.S. and world demand grows. Much of it could be productively cultivated with new techniques, thereby producing profitable crops, reducing water pollution, and costing taxpayers nothing. One of the most prominent new techniques is no-till farming, in which machines cut thin parallel grooves in the soil and plant seeds in a single pass, minimizing runoff while reducing a farmer’s costs. Studies show that no-till farming is usually more profitable than full plowing because of savings in labor, fuel, and machinery.

Evidence suggests that CRP’s gains have been temporary. As with the similar Soil Bank program of the 1960s, once contracts expire, virtually all lands are returned to production. Unless the contracts are renewed indefinitely, most of the 30 million acres will again be farmed, again threatening the environment if farmers fail to adopt no-till practices.

The second approach involves compliance schemes. To receive payments from certain agricultural programs, a farmer must meet certain conservation standards. The 1985 Food Security Act contained the boldest set of compliance initiatives in history. Biggest among them was the Conservation Compliance Provision, which required farmers to leave a minimum amount of crop residues on nearly 150 million acres of highly erodible cropland. In effect, these provisions established codes of good practice for farmers who received public subsidies, and they were a first step toward more direct controls. However, these programs are probably doomed. The general inclination of government and the public to eliminate subsidies led to passage of federal farm legislation in 1996 that includes plans to phase out payment programs by 2002.

The third approach to reducing agriculture’s impact on the environment involves direct regulation of materials such as pesticides that are applied to the land. These programs have been roundly criticized from all quarters. Farm groups complain that pesticide regulation has been too harsh. Environmental groups counter that although the regulations specify the kinds of pesticides that can be sold and the crops they can be used on, they do not restrict the amount of pesticide that can be spread. Even if regulations did specify quantity, enforcement would be virtually impossible. The registration process for pesticide use has also been miserably slow and promises to get slower as a result of the 1996 Food Quality Protection Act, which requires the reregistration of all pesticides against stricter criteria.

In sum, current approaches to limit the environmental effects of agriculture have cost taxpayers large amounts of money with little guarantee of long-term protection. Unless a steady stream of federal funding continues, many of the gains will evaporate. And the idea of paying people not to pollute is becoming increasingly untenable, especially at the state level.

Getting smarter

Four actions are needed to establish a smarter environmental policy for agriculture.

Set specific, measurable environmental objectives. Without quantifiable targets, an environmental program cannot be properly guided. To date, most programs have called for the use of specific farming practices rather than setting ambient quality conditions for surface water and groundwater. This is largely because of political precedent and because of the complex nonpoint nature of many pollution problems. However, setting a specific water quality standard, such as nitrate or pesticide concentration in drinking water, presumes that the science exists to trace contaminants back to specific lands. Such research is currently sparse, although major assessments by the NRC and others indicate that clearer science is possible. Setting standards would help stimulate the science.

Several states are taking the lead in setting standards. Nebraska has set maximum groundwater nitrate concentration levels; if tests show concentrations above the standard, restrictions on fertilizer use can be imposed. Florida has implemented controls on the nutrient outflows from large dairies into Lake Okeechobee, which drains into the Everglades. In Oregon, environmental regulators set total maximum daily loads of pollutants discharged into rivers and streams, and the state Department of Agriculture works with farmers to reduce the discharges. Voluntary measures accompanied by government payments are tried first, but if they are not sufficient, civil fines can be imposed in cases of excessive damages.

The federal government can support the states’ lead by setting minimum standards for particular pollutants that pose environmental health risks, such as nitrates and phosphorus. The Clean Water Action Plan would establish such criteria for farming by 2000 and for confined animal facilities by 2005. Standards for sediment should be set as well.

There is no easy way around the need for a statutory base that defines what gets done, when it gets done, and how it gets done at the farm, county, state, regional, and national levels. Unless those specific responsibilities are assigned, significant progress on environmental problems will not be made.

Create a portfolio of tangible, significant incentives. Without sufficient incentives, we have little hope of meeting environmental objectives. The best designs establish minimum good-neighbor performance, below which financial support will not be provided, and set firm deadlines beyond which significant penalties will be imposed. Incentive programs could include one-stop permitting for all environmental requirements, such as Idaho’s “One Plan” program, which saves farmers time and money; “green payments” for farms that provide environmental benefits beyond minimum performance; a system for trading pollution rights; and local, state, or national tax credits for exemplary stewardship.

It is important to stress that a silver-bullet approach to the use of incentives does not exist. The most cost-effective strategy for any given farm or region will be a unique suite of flexible incentives that fit state and local environmental, economic, and social conditions. Although flexible incentives can require substantial administrative expense, they can also trigger the ingenuity of farmers and ranchers, much as market signals have done for the development of more productive crops and livestock.

Although incentives are preferable, penalties and fines will still be needed. Pollution from large factory farms is now spurring states and the federal government to apply to farms the strict limits typically set for other industrial factories. Some of these farms keep more than half a million animals in small areas. The animals can generate hundreds of millions of gallons of wastes per year–as much raw sewage as a mid-sized city but without the sewage treatment plants. The wastes, which are stored in open “lagoons” or spread on fields as fertilizer, not only produce strong odors but can end up in streams and rivers and possibly contaminate groundwater. In 1997, North Carolina, which now generates more economic benefits from hog farms than it does from tobacco, imposed sweeping new environmental rules on hog farming. Under the Clean Water Action Plan, EPA is proposing to work with the states to impose strict pollution-discharge permits on all large farms by 2005. EPA also wants to dictate the type of pollution-control technologies that factory farms must adopt.

Because pollution problems are mostly local, states must do more than the federal government to create a mix of positive and negative incentives, although the federal government must take the lead on larger-scale problems that cross state boundaries. Both the states and the federal government should first focus on places where a clear agriculture-pollution link can be shown and the potential damages are severe.

Harness the power of markets. Stimulating as much private environmental initiative as possible is prudent, given the public fervor for shrinking government. The 1996 Federal Agriculture Improvement and Reform Act took the first step by dismantling the system of subsidizing particular crops, which had encouraged farmers to overplant those crops and overapply fertilizers and pesticides in many cases. The potential for using market forces is much broader.

One of the newest and potentially most effective mechanisms is a trading system for pollution rights. A trading system set up under the U.S. Acid Rain Program has been very effective in reducing air pollution, and trading systems are being proposed to meet commitments made in the recently signed Kyoto Protocol to reduce emissions of greenhouse gases.

Trading systems work by setting total pollution targets for a region, then assigning a baseline level of allowable pollution to each source. A company that reduces emissions below its baseline can sell the shortfall to a company that is above its own baseline. The polluter can then apply that allowance to bring itself into compliance. The system rewards companies that reduce emissions in low-cost ways and helps bad polluters buy time to find cost-effective ways to reduce their own emissions.
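To make the mechanics concrete, the toy sketch below walks through a single settlement round for three hypothetical sources sharing a 300-ton cap. The names, baselines, and $25-per-ton price are invented for illustration and are not drawn from any actual trading program.

```python
# Illustrative sketch only: a toy cap-and-trade settlement. The firm names,
# baselines, and the $25/ton price are hypothetical.

def settle_trades(firms, price_per_ton=25.0):
    """Compute each source's position against its baseline.

    `firms` is a list of (name, baseline_tons, actual_emission_tons).
    A negative position is a surplus that can be sold; a positive one is a
    shortfall that must be covered by buying allowances.
    """
    positions = {name: actual - baseline for name, baseline, actual in firms}
    surplus = sum(-p for p in positions.values() if p < 0)
    shortfall = sum(p for p in positions.values() if p > 0)
    traded = min(surplus, shortfall)
    print(f"{traded:.0f} tons traded at ${price_per_ton:.0f}/ton "
          f"(${traded * price_per_ton:,.0f} changes hands)")
    return positions

# Hypothetical region capped at 300 tons, split evenly across three sources.
settle_trades([
    ("Farm A", 100, 70),     # 30 tons under baseline: sells allowances
    ("Farm B", 100, 120),    # 20 tons over baseline: buys allowances
    ("Utility C", 100, 100), # exactly at baseline: no trade needed
])
```

The point of the exercise is simply that the source that comes in under its baseline, not the regulator, supplies the allowances that the overshooting source must buy, which is what rewards cheap abatement wherever it can be found.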

A few trading systems are already being tried in agriculture. Farms and industrial companies on North Carolina’s Pamlico Sound are authorized to trade water pollution allowances, but few trades have taken place thus far because of high transaction costs. Experiments are also under way in Wisconsin and Colorado, but the complications of using trading systems for nonpoint pollution will slow implementation.

Pollution taxes can also create incentives for change. Economists have proposed levying taxes that penalize increases in emissions. Some also propose using the proceeds to reward farmers who keep decreasing their emissions below the allowable limit. The tax gives farmers the flexibility to restructure their practices, but political opposition and potentially high administrative costs have hindered development.


Another cost-effective and nonrestrictive market mechanism is facilitating consumer purchases of food produced by farmers who use minimal amounts of pesticides and synthetic fertilizers. Food industry reports indicate that a growing segment of the public will pay a premium for food and fiber cultivated in environmentally friendly ways. The natural foods market has grown between 15 and 20 percent per year during the past decade, compared with 3 to 4 percent for conventional food products. If this trend continues, natural foods will account for nearly one-quarter of food sales in 10 years. Because organic foods command higher prices, farmers can afford to use practices that reduce pollution, such as crop rotation and biologically based pest controls.

Government can play a stronger role in promoting the sale of natural foods. It should make sure that consumers have accurate information by monitoring the claims of growers and retailers and establishing production, processing, and labeling standards. One experiment to watch is in New York, where the Wegman’s supermarket chain is promoting the sale of “IPM” foods grown by farmers who have been certified by the state as users of integrated pest management controls.

Stimulate new research and technology. One of the most overlooked steps needed to establish smarter environmental policy for agriculture is better R&D. Most research to date has focused on remediation of water pollution, rather than forward-looking work that could prevent pollution. Over the years, research for environmental purposes should have increased relative to food production research, but it is not clear that it has.

What is most needed is better science that clarifies the links between agricultural runoff and water quality. As stated earlier, this will be forced as regulations are imposed, but dedicated research by USDA, EPA, and state agricultural and environmental departments should begin right away.

R&D to produce better farm technology is also needed. Despite an imperfect R&D signaling process, some complementary technologies that simultaneously enhance environmental conditions and maintain farm profit have emerged. Examples include no-till farming, mulch-till farming, integrated pest management, soil nutrient testing, rotational grazing (moving livestock to different pastures to reduce the buildup of manure, instead of collecting manure), and organic production. Most of these techniques require advanced farming skills but have a big payoff. No-till and mulch-till farming systems, for example, have transformed crop production in many parts of the nation and now account for nearly 40 percent of planted acres. However, these systems were driven by cost savings from reduced fuel, labor, and machinery requirements and could improve pollution control even further if developed with this goal in mind. Integrated pest management methods generally improve profits while lowering pesticide applications, but they could benefit from more aggressive R&D strategies. A farmer’s use of simple testing procedures for nutrients in the soil before planting has been shown to reduce nitrogen fertilizer applications by about one-third in some areas, saving farmers $4 to $14 per acre, according to one Pennsylvania study.

Other technologies with as-yet-unknown potential are emerging, including “precision farming” and genetic engineering of crops to improve yield and resist disease. Precision farming uses yield monitors, computer software, and special planting equipment to apply seeds, fertilizers, and pesticides at variable rates across fields, depending on careful evaluation and mapping techniques. These complementary technologies have developed mostly in response to the economic incentive to reduce input costs or increase yields; their full potential for environmental management has been neglected. It is time to make pollution prevention and control an explicit objective of agricultural R&D policy.

Accountability and smart reform

The long-standing lack of public and legislative attention to agricultural pollution is changing. Growing scrutiny suggests that blithely continuing down the path of mostly voluntary-payment approaches to pollution management puts agriculture in a vulnerable position. As is happening in Maryland, a single bad incident could trigger sweeping proposals–in that case, possibly an outright ban against the spreading of chicken manure on fields–that would impose serious costs on agriculture. A disaster could cause an even stronger backlash; the strict clean-water regulations of the 1970s came in torrents after the Cuyahoga River in Ohio actually caught fire because it was so thick with industrial waste.

The inertia that pervades agriculture is understandable. For decades farmers have been paid for making changes. But attempts by agricultural interest groups to stall policy reforms, including some important first steps in the Clean Water Action Plan, will hamper farming’s long-term competitiveness, or even backfire. Resistance will invite more direct controls, and slow progress on persistent environmental problems will invite further government intervention.

Under the smarter environmental policy outlined above, farmers, environmental interest groups, government agencies, and the scientific community can create clear objectives and compelling incentives to reduce agricultural pollution. Farmers that deliver environmental benefits beyond their community responsibilities should be rewarded for exemplary performance. Those that fall short should face penalties. We ask no less from other sectors of the economy.

Forum – Summer 1998

Climate change

Robert M. White’s “Kyoto and Beyond” and Rob Coppock’s “Implementing the Kyoto Protocol” (Issues, Spring 1998) are excellent overviews of the issues surrounding the Kyoto Protocol. As chairman of the House Science Committee, I have spent a great deal of time analyzing the Kyoto Protocol, including chairing three full Science Committee hearings this year on the outcome and implications of the Kyoto negotiations. And in December 1997, I led the congressional delegation at the Kyoto conference.

The facts I have reviewed lead me to believe that the Kyoto Protocol is seriously flawed-so flawed, in fact, that it cannot be salvaged. The treaty is based on immature science, costs too much, leaves too many procedural questions unanswered, is grossly unfair because it excludes participation by developing countries, and will do nothing to solve the supposed problem it is intended to solve. Nothing I have heard to date has persuaded me otherwise.

Those who argue that the science of climate change is a settled issue should take notice of the National Academy of Sciences’ National Research Council (NRC) Committee on Global Change’s report, entitled Global Environmental Change: Research Pathways for the Next Decade, issued May 19, 1998. The NRC committee, charged with reviewing the current status of the U.S. Global Change Research Program, stated that the Kyoto agreements “are based on a general understanding of some causes and characteristics of global change; however, there remain many scientific uncertainties about important aspects of climate change.” And Appendix C of the report’s overview document lists more than 200 scientific questions that remain to be adequately addressed.

I want to note one major issue not discussed by White or Coppock-the Kyoto Protocol’s impact on the U.S. armed forces. Because the Department of Defense is the federal government’s largest single user of energy and largest emitter of greenhouse gases, the protocol essentially imposes restrictions on military operations, in spite of Pentagon analyses showing that such restrictions would significantly downgrade the operational readiness of our armed forces. In addition, the protocol would hamper our ability to conduct unilateral operations such as we undertook in Grenada, Libya, and Panama. On May 20, 1998, the House resoundingly rejected these restrictions by a vote of 420 to 0, approving the Gilman-Danner-Spence-Sensenbrenner-Rohrabacher amendment, which prohibits any provision of law, any provision of the Kyoto Protocol, or any regulation issued pursuant to the protocol from restricting the procurement, training, or operation and maintenance of U.S. armed forces.

This unanimous vote of no confidence in the Kyoto Protocol follows last summer’s 95-to-0 Senate vote urging the administration not to agree to the Protocol if developing countries were exempted-an admonition ignored by the administration. These two “nos” to Kyoto mean the agreement is in serious trouble on Capitol Hill.

REPRESENTATIVE F. JAMES SENSENBRENNER, JR.

Republican of Wisconsin

Chairman, House Science Committee


Like Rob Coppock, I believe that setting a drop-dead date of 2010 for reducing global CO2 emissions by 7 percent below 1990 levels is unrealistic and even economically unsound. Experience has shown, at least in the United States, that citizens become energy-sensitive only when the issue hits them in the pocketbook. (One need only look at the ever-increasing demand for fuel-guzzling sport utility vehicles that has accompanied the historic low in gas prices.) If gasoline prices were raised even to their current level in Europe (about $4 per gallon), I think you would have the makings of profound social unrest in the United States.

If dramatic increases in fuel prices through tax increases are politically difficult (if not impossible), then the only alternative available to governments is the power to regulate-to require the use of processes and products that use less energy and emit less CO2. Germany, as well as most other European nations, has traditionally used high energy costs to encourage consumers to reduce demand and increase efficiency. Germany’s energy use per capita is about half that of the United States, but I don’t think you will find any major differences in standard of living between Germans and Americans. This indicates that there are many ways to improve energy efficiency and thus reduce emissions in the United States.

I prefer slow but steady progress toward reduction of carbon emissions, taking into account both the long- and short-term economic implications of taxation and regulation. As Coppock points out, “the gain from rushing to do everything by 2010 is nowhere near worth the economic pain.” Just as one sees the “magic” of compound interest only at the end of a long and steady investment program, we can provide a better global climate future for generations yet unborn through consistent actions taken now.

Kyoto is a very significant step that will shape how future generations judge our efforts to hold global warming to a tolerable level. Germany stands ready to support the spirit of the Kyoto agreement and to help all nations achieve meaningful improvements in energy use and efficiency. It has offered to host the secretariat for implementing the Kyoto Protocol, whose particulars are to be decided in November 1998 by the Conference of the Parties. One can only hope that these decisions recognize some of the points that Coppock and I have raised, especially the economic implications of massive efforts to meet an arbitrary date.

HEINZ RIESENHUBER

Member, German Bundestag

Bonn, Germany

Former German Minister for Science and Technology


The articles by Robert M. White and Rob Coppock support what industry has been saying for years: Near-term actions to limit greenhouse gas emissions are costly and would divert scarce capital from technological innovation. Building policy around technology’s longer time horizon, rather than the Kyoto Protocol’s 10 to 12 years, means that consumers and businesses could rationally replace existing capital stock with more energy-efficient equipment, vehicles, and processes. Avoided costs free up resources for more productive investments, including energy technologies and alternative energy sources.

White notes that the Kyoto Protocol is “at most . . . a small step in the right direction.” Worse, trying to implement it would mean “carrying out a massive experiment with the U.S. economy with unknown results.” What we do know is that all economic models that don’t include unrealistic assumptions indicate negative results.

Most of White’s “useful actions” are on target: pay attention to concentrations, not emissions; adaptation has been and will remain “the central means of coping with climate change;” disassociating costs and benefits attracts free riders; “population stabilization can have an enormous impact on emissions reduction.” Although he’s right to call for more technological innovation, he may have overstated our grasp of climate science when he says that “only through the development of new and improved energy technologies can reductions in greenhouse gas emissions of the necessary magnitude be achieved without significant economic pain.” His closing paragraph is closer to the mark: “If climate warming indeed poses a serious threat to society . . .” Finally, wind, photovoltaics, and biomass are still not economically competitive except in niche markets, nor can companies yet stake their future on hybrid electric or fuel cell cars. These technologies show great promise, but their costs probably will remain high in the time frame defined by the Kyoto Protocol.

Coppock’s most insightful comment about the protocol is that “no credit is given for actions that would reduce emissions in future periods” and this “creates a disincentive for investments” in new technologies. He also puts CO2 emissions in perspective (1850 levels will double by around 2100), rejects the protocol’s timetable (“the gain from rushing to do everything by 2010 is nowhere near worth the economic pain”), and cautions against assuming that a tradable permits regime would be easy to set up (“the trading is between countries. But countries don’t pollute; companies and households do”) or maintain (“how would pollution from electricity generated in France but consumed in Germany be allocated?”).

However, he seems willing to create a large UN bureaucracy to enforce a bad agreement. Moreover, his model, the International Atomic Energy Agency, is a recipe for massive market intervention: IAEA’s implementation regime “of legally binding rules and agreements, advisory standards, and regulations” includes the all-too-common industry experience of governments turning “today’s nonbinding standards [into] tomorrow’s binding commitments.” The Kyoto Protocol goes IAEA one bureaucratic step better: Any government that ratifies the protocol grants administrators the right to negotiate future and more stringent emission targets.

As Coppock concludes, “the world’s nations may be better off scrapping the Kyoto Protocol and starting over.” To which White adds: “Developments in energy technology show promise, and there has been a gradual awakening to this fact.” Both steps are needed if we are to have a dynamic strategy that reflects a wide range of plausible climate outcomes and also gives policymakers room to adjust as new scientific, economic, and technological knowledge becomes available.

WILLIAM F. O’KEEFE

President

American Petroleum Institute

Washington, D.C.


Since the emergence of the Berlin Mandate, the AFL-CIO has been on public record in opposition to the direction in which international climate negotiations have been headed. Upon the conclusion of the Kyoto round, we denounced the treaty but made it clear that, flawed as the treaty is, we want to be a part of solving this real and complicated global problem. To that end, we are working with allies who want not only to examine real solutions to climate change but also to address the economic consequences those solutions present for U.S. workers and their communities.

The articles by Rob Coppock and Robert M. White mirror our concerns about the correct approach regarding action on the global climate change issue. We have taken a straightforward position: A global concentration target must be identified so that the entire global community can join in taking specific actions that, in sum, will result in a stable and sustainable outcome; domestic economic considerations must be as important in the overall effort as are environmental ones; and time frames and plans should guarantee a transition that is smooth but mandates that action begin now.

We are certain that technology is part of the answer for reducing our domestic emissions and improving efficiency as well as for avoiding in the developing world the same “dirty” industrial revolution we’ve experienced. We understand that there are finite resources available for this pursuit and that we’d better spend them wisely, in some sensible strategic manner, from the start.

We have only one chance to properly invest our time, energy, and money. A serious commitment to include regular participation by workers will serve this process well. We can become more energy-efficient plant by plant, institution by institution, and workplace by workplace in this country through worker participation. It would be irrational to pursue solutions that did not start first with the “low-hanging fruit” that is available at every U.S. workplace, perhaps without the need for much investment or expense.

We agree that we need a clear strategy more than a quick fix. We need to honor natural business cycles and long-term investment decisions. We should not spend excessively to meet arbitrary deadlines but rationally to meet national strategic objectives. This is a political problem as much as it is an environmental one. We add our voices to the voices of those who will pursue reasonable strategic solutions that include everybody and who will move this process with some urgency.

DAVID SMITH

Director of Public Policy

AFL-CIO

Washington, D.C.


I know Rob Coppock personally and greatly respect his perspectives on science and policy. I am sympathetic to the logic of the arguments he presents in “Implementing the Kyoto Protocol” in terms of taking a more measured and gradual approach to mitigating greenhouse gas emissions, and I agree that careful, well-considered strategies are more likely to produce better long-term results at less cost.

For the sake of further thought and discussion, however, I would like to raise a philosophical point or two, on which I invite Rob and others to comment. Major pieces of environmental legislation passed in the United States in the early 1970s contained ambitious (perhaps even heroic) targets and timetables for pollution abatement that strike me as being very similar in nature to the provisions of the Kyoto Protocol. You will remember that we were to eliminate the smog plaguing our major cities, make our rivers fishable and swimmable, and so on, all in short order (generally by the mid-1980s, as I recall). Was an awful lot of money spent? Yes. Was money wasted? Most certainly. Were the targets and timetables met? Hardly ever. Were the laws flawed? Yes. (Witness the continuing amendments.) Was it the right thing to do at the time? This is the critical question, and despite all the criticisms of these laws raised over the past quarter century and more, I would still answer, “Yes. Most definitely.”

What those early expressions of public policy such as the Clean Air Act of 1970 and the Water Pollution Control Act of 1972 did was not just reduce pollution (which they did, in some measure, accomplish); they also changed the trajectory of where we were headed as a society, both physically, in terms of discharges and emissions, and mentally, in terms of our attitudes toward the levels of impact on our environment we were willing to accept.

I am much concerned about this same issue of trajectory when it comes to global warming. Greenhouse gas emissions continue their inexorable increase, and every study I read projects growth in energy demand and fossil fuel use in industrialized nations, as well as explosive growth in the developing world. I am concerned that this tide will swamp plans based on otherwise worthy concepts such as “waiting to install new equipment until old equipment has come to the end of its useful life.” (I heard similar arguments made concerning acid rain and other environmental issues, yet the genius of our engineers managed to bless a lot of this old equipment with almost eternal life.)

Sometimes good policy is more than carefully orchestrated and economically optimized plans and strategies. Sometimes there has to be a sense of vision, a “call to arms,” and maybe even seemingly impractical targets and timetables. If climate change is real, now may be one of those times.

MARTIN A. SMITH

Chief Environmental Scientist

Niagara Mohawk Power Corporation

Syracuse, New York


Rob Coppock’s thoughtful article faults the Kyoto Protocol for its emphasis on near-term targets to the exclusion of more fundamental changes that could enable us to ultimately stabilize global concentrations of greenhouse gases. Without some remarkable breakthroughs at this November’s Buenos Aires Conference of the Parties, Coppock envisions that the Kyoto Protocol will prove very costly to implement; will, even if implemented, do relatively little to slow the steady rise in global concentrations of greenhouse gases; and will be unlikely to be ratified by the U.S. Senate.

A more fundamental shortcoming of the Kyoto Conference may have been the failure to create a level playing field for emerging green energy technologies and to provide near-term market incentives to producers of transformational energy systems. Industrialized countries left Kyoto without committing to phase out multibillion-dollar yearly subsidies to domestic fossil fuel industries or to shift the roughly $7 billion of annual direct government investment in energy R&D in OECD countries to provide more than a pittance for renewable energy sources or efficiency. Even in the face of evidence that an energy revolution may be under way as profound as that which between 1890 and 1910 established the current system of grid-based fossil fuel electricity and gasoline-fueled cars, no effort was made to aggregate OECD markets for green energy or, aside from an ill-defined Clean Development Mechanism, to provide inducements for such applications in developing countries. The Clinton administration’s promising proposal to provide about $6.3 billion over five years in tax and spending incentives to promote greenhouse-benign technologies in the United States has foundered in Congress on the grounds that this would be backdoor implementation of a not-yet-ratified protocol.

Even if universally ratified by industrialized countries and fully implemented, the Kyoto Protocol will make only a small dent in the continuing rise in global greenhouse concentrations that is driving climate change. Stabilization of greenhouse concentrations would require about a 60 percent global reduction in CO2 emissions below 1990 levels; even if the Kyoto Protocol is fully implemented, global CO2 emissions are likely to rise, according to a U.S. Energy Information Administration analysis, to 32 percent above 1990 levels by 2010. The radical reductions required for climate stabilization will require a very different model than that established in Kyoto.

Climate stabilization will be achieved not through global environmental command and control but by emulating the investment strategies of the information and telecommunications revolutions. Some characteristics of emerging green technologies, especially photovoltaics, fuel cells, wind turbines, and micropower plants, could mirror the information technology model of rapid innovation, mass production, descending prices with rising volume, and increased market demand. Major firms such as Enron, BP, and Shell, and governments such as those of Denmark and Costa Rica, have begun to glimpse these possibilities. The challenge of future climate negotiations is to develop policies to reinforce this nascent green energy revolution, which may ultimately deliver clean energy at prices lower than those of most fossil fuels.

JOHN C. TOPPING, JR.

President

Climate Institute

Washington, D.C.


Rob Coppock puts his finger on the critical point: Concentrations of greenhouse gases are closely coupled to climate change; emissions are not. It is cumulative emissions over decades that will shape the future concentration of CO2, the principal greenhouse gas.

Finding a way to make sure that global emissions peak and then begin to decline will be a great challenge. Perhaps more daunting still will be the challenge of eventually reducing emissions enough to achieve atmospheric stabilization. Stabilization of CO2 at 550 parts per million by volume (ppmv) means reducing per capita emissions at the end of the 21st century to approximately half a metric ton of carbon per person per year. The trick is to do this while maintaining and improving the standard of living of the developed nations and raising the standard of living of the developing nations. Per capita emissions in the United States are approximately five metric tons of carbon per person per year, and only some developing nations can presently claim emissions at or below one-half metric ton per person per year. Even if the world eventually accepts 750 ppmv as a tolerable concentration, global per capita emissions could average only one metric ton per person per year.
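A back-of-the-envelope tally, using only the per capita figures above plus the roughly 10 billion people the letter assumes below, shows how tight these budgets are (an illustration, not a modeling result):

```latex
% Totals implied by the letter's per capita budgets, assuming the roughly
% 10 billion people the author cites below (illustrative only).
\begin{align*}
\text{550 ppmv budget:}\quad & 0.5~\text{tC/person/yr} \times 10\times10^{9}~\text{people} \approx 5~\text{GtC/yr}\\
\text{750 ppmv budget:}\quad & 1~\text{tC/person/yr} \times 10\times10^{9}~\text{people} \approx 10~\text{GtC/yr}\\
\text{United States today:}\quad & 5~\text{tC/person/yr} = 10 \times \text{the 550 ppmv per capita budget}
\end{align*}
```

On those numbers, current U.S. per capita emissions are about ten times the 550-ppmv budget and five times the 750-ppmv budget.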

Ultimately, something beyond Kyoto is needed: a strategy to preserve a specific concentration ceiling. The strategy needs two pieces-a policy that will clearly indicate that in the future emissions will peak and decline, and a plan for delivering technologies that will lower net carbon emissions to the atmosphere. Delivering technologies that will enable humans to provide the energy services needed to give economic prosperity to the entirety of Earth’s population of 10 billion or so, while releasing less carbon per capita than at present, will require more than just the best available technologies of today. It will require a commitment to R&D, including the development of technologies to enable the continued growth of fossil fuel use in a carbon-constrained world.

Defining and building a research portfolio whose size and composition will deliver the next generation of energy technologies and lay down the foundations for future technologies is a critical task for the years ahead. It cannot be undertaken by a single agency, firm, or institution acting alone. It requires an international public-private partnership committed for the long term. Several international efforts are beginning to take shape: the Climate Technology Initiative, announced in Kyoto by the United States and Japan; the IEA Greenhouse Gas R&D Programme; and the Global Energy Technology Strategy Project to Address Climate Change. They offer real hope that our grandchildren will inherit a prosperous world with limited atmospheric CO2.

JAE EDMONDS

Senior Staff Scientist

Battelle Memorial Institute

Washington, D.C.


Robert M. White and Rob Coppock overstate the difficulty of meeting the Kyoto emission targets. Climate protection is not costly but profitable, because saving fuel costs less than buying fuel. No matter how the climate science turns out or who goes first, climate protection creates not price, pain, and penury, but profits, innovation, and economic opportunity. The challenge is not technologies but market failures.

Even existing technologies can surpass the Kyoto CO2 targets at a profit. For example, contrary to Coppock’s bizarre concept for retrofitting commercial buildings, conventional retrofits coordinated with routine 20-year renovation of large office towers can reduce their energy use by about 75 percent, with greatly improved comfort and productivity. Just retrofitting motor and lighting systems can cheaply cut U.S. electricity use in half.

On the supply side, today’s best co- and trigeneration alone could reduce U.S. CO2 emissions by 23 percent, not counting switching to natural gas or using renewables. All these strategies are widely profitable and rapidly deployable today. In contrast, nuclear fission and fusion would worsen climate change by diverting investment from cheaper options, notably efficient end use.

Just saving energy as quickly as the United States did during 1979-86, when gross domestic product rose 19 percent while energy use fell 6 percent, could by itself achieve the Kyoto goals. But this needn’t require repeating that era’s high energy prices; advanced energy efficiency is earning many firms annual returns of 100 to 200 percent, even at today’s low and falling prices. Rapid savings depend less on price than on ability to respond to it: Seattle’s electric rates are half those of Chicago, yet it is saving electrical loads 12 times faster than Chicago because its utility helps customers find and buy efficiency.
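As a rough check on the 1979-86 comparison, the two figures quoted imply the following change in energy use per dollar of GDP (simple arithmetic on the numbers given, nothing more):

```latex
% Energy intensity change implied by the quoted 1979-86 figures:
% GDP up 19 percent, energy use down 6 percent.
\[
\frac{E_{1986}/\mathrm{GDP}_{1986}}{E_{1979}/\mathrm{GDP}_{1979}}
  = \frac{1 - 0.06}{1 + 0.19}
  = \frac{0.94}{1.19}
  \approx 0.79
\]
```

That is, energy intensity fell roughly 21 percent in seven years, or about 3 percent per year.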

The Kyoto debates about carbon reduction targets are like Congress’s fierce 1990 debates about sulfur reduction targets. What mattered was the trading mechanism used to reward sulfur reductions-the bigger and sooner, the more profitable. Now sulfur reductions are two-fifths ahead of schedule, at about 5 to 10 percent of initial cost projections. Electric rates, feared to soar, fell by one-eighth. The Kyoto Protocol and U.S. climate policy rely on similar best-buys-first emissions trading. But trading carbon will work even better than trading sulfur: It will rely mainly on end-use energy efficiency (which could not be used to bid in sulfur trading), and saving carbon is more profitable than saving sulfur.

Kyoto’s strategic message to business-carbon reductions can make you rich-is already changing business behavior and hence climate politics. Leading chemical, car, semiconductor, and other firms are already behaving as if the treaty were ratified, because they can’t afford to lose the competitive advantage that advanced energy productivity offers. The profit-driven race to an energy-efficient, commercially advantageous, climate-protecting, and sustainable economy is already under way.

AMORY B. LOVINS

Director of Research

Rocky Mountain Institute

Snowmass, Colorado


In my view, not one of the articles on global warming in the Spring 1998 Issues puts this potentially disastrous global problem in meaningful perspective. Robert M. White comes closest with his point that “Only through the development of new and improved energy technologies can reductions in greenhouse gas emissions of the necessary magnitude be achieved.” However, with one exception, none of the technologies he lists can provide a major solution to the problem.

In the next half century, world energy needs will increase because of increases in world population and living standards. If the projected 9.5 billion world population in 2050 uses an average of only one-third of the per capita energy use in the United States today, world energy needs will triple. The only available energy source that can come close to providing the extra energy required without increasing greenhouse gas emissions is nuclear power. Solar power sounds wonderful, but it would take 50 to 100 square miles of land to produce the same power as one large nuclear or coal plant built on a couple of acres. A similar situation exists with wind power. Fusion too could be wonderful, but who can predict when or if it will become practical?
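The tripling estimate can be checked with rough arithmetic. The assumptions that world-average per capita energy use is about one-fifth of the U.S. level and that the current population is about 6 billion are added here for illustration; only the 9.5 billion and one-third figures come from the letter.

```latex
% Rough check of the tripling claim. The one-fifth ratio of world-average to
% U.S. per capita energy use and the ~6 billion current population are
% illustrative assumptions; only the 9.5 billion and one-third figures
% come from the letter.
\[
\frac{E_{2050}}{E_{\text{today}}}
  \approx \frac{9.5\times10^{9} \times \tfrac{1}{3}\,e_{\mathrm{US}}}
               {6\times10^{9} \times \tfrac{1}{5}\,e_{\mathrm{US}}}
  = \frac{9.5/3}{6/5}
  \approx 2.6
\]
```

Under those assumptions, the result is indeed roughly a tripling.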

In view of the coming world energy crunch, we should be working on all of these technologies, and hopefully major advances can be made. But is it responsible to let the public think we can count on unproven technologies? And is it responsible to imply that we can solve the problem by emissions trading or other political approaches, as suggested by Rob Coppock and Byron Swift?

I respect the qualifications of the three authors I refer to above, and I don’t necessarily disagree with the points they make. But in terms of educating and providing perspective to readers of Issues who are not expert in energy issues, the articles do a disservice. In principle, nuclear energy could, over the next 50 years, provide the added world energy needed without greenhouse gas emissions. But in this country nuclear energy is going downhill because the public doesn’t understand its need and value. This situation results from the antinuclear forces, the so-called environmentalists, who have misled the public and our administration. But aren’t we technologists also to blame for not informing the public about the overall problem and its one effective solution?

If we continue on our present course and the greenhouse effect is real, our children and grandchildren who will suffer can look back and blame the anti-nukes. But won’t they, and shouldn’t they, also blame us?

BERTRAM WOLFE

Monte Sereno, California

Former vice president of General Electric and head of its nuclear energy division


Patent reform pending

The title of Skip Kaltenheuser’s article “Patent Nonsense” aptly describes its contents (Issues, Spring 1998). Kaltenheuser alleges that a provision to create “prior user rights” would undermine the patent system. Let’s begin by explaining that the concept refers to a defense against a charge of patent infringement. This defense is available only to persons who can prove they made the invention at least one year before the patentee filed a patent application and who also actually used or sold the invention in the United States before the patentee’s filing date.

I should point out that the notion of a prior use defense is not unprecedented. The 1839 Patent Act provided that “every person . . . who has . . . constructed any newly invented . . . composition of matter, prior to the application by the inventor . . . shall be held to possess the right to use . . . the specific . . . composition of matter . . . without liability therefore to the inventor.” Moreover, like H.R. 400 and S. 507, prior use under the 1839 act did not invalidate the patent. Even today there is a form of prior use defense for a prior inventor who has not abandoned, suppressed, or concealed a patented invention, or where the patented invention is in public use or on sale more than one year before the U.S. filing date of the patent.

Fundamentally, a prior use defense is needed because no company, large or small, can afford to patent every invention it makes and then police the patents on a global basis. Where inventions are kept as trade secrets to be used in U.S. manufacturing plants, these inventions are job creators for U.S. workers. However, a risk exists that a later inventor may obtain and enforce a patent that can disrupt the manufacturing process. In almost one-half of such cases, the later inventor will be a foreign patent holder. Kaltenheuser draws on a former patent commissioner’s suggestion of how to avoid this risk: Simply publish all your unpatented manufacturing technology so no one can patent it. I don’t know about you, but publishing this country’s great storehouse of technological know-how without any kind of protection, provisional or otherwise, so that its foreign competitors can use it to compete against U.S. workers in U.S. manufacturing plants doesn’t strike me as a terribly good idea.

That brings me to another point-the prior use defense is a perfectly legal “protectionist” exception to a patent. All of our trading partners use prior user rights for just this purpose. Nearly half of all U.S. patents are owned by foreigners. The prior use defense will mean that these later-filed foreign-owned patents cannot be used to disrupt U.S. manufacturing operations and take U.S. jobs. To qualify for the defense, the prior commercial use must be in the United States; a use in Japan or Europe will not qualify.

Finally, it should be noted that the prior use defense is just that-a defense in a lawsuit. The person claiming to be a prior user must prove this in a court of law, and anyone who alleges such a defense without a reasonable basis will be required to pay the attorney’s fees of the patentee.

There are more safeguards in H.R. 400 and S. 507 than space permits me to cover in this letter, but suffice it to say that there have been 16 days of congressional hearings on all aspects of these bills and two days of debate on the House floor. The legislation is supported by five of the six living patent commissioners and by a vast number of large and small high-tech U.S. companies who rely on the patent system, create jobs in the United States, and contribute to our expanding economy. H.R. 400 and S. 507 will strengthen the patent system and allow us to continue our prosperity in the 21st century.

REPRESENTATIVE HOWARD COBLE

Republican of North Carolina


“Patent policy isn’t a topic that lends itself to the usual sausage-making of Congress.” Skip Kaltenheuser’s concluding statement, coupled with material headlined, “The bill’s bulk obfuscates,” captures the essence of recent convoluted attempts to legislate changes in the U.S. patent system.

The U.S. patent system was designed to enable inventors to disclose their secrets in return for the exclusive right to market their innovation for a period of time. There are many in government, industry, and academia who fail to appreciate this. They do not understand that disclosure helps the economy by putting new ideas in the hands of people who, for a fee to the patent holder, find novel and commercially applicable uses for these ideas. Meanwhile, exclusive use of the innovation by the inventor provides a huge incentive for inventors to keep inventing.

Legislation to change the patent system has been pushed in the past three Congresses by certain big business interests, domestic and foreign. Currently, the Senate is considering S. 507, the Omnibus Patent Act of 1997. This measure is intended to harmonize our patent standards with those of foreign systems. I am opposed to this bill, and its House of Representatives companion bill H.R. 400, because they contain several elements that will damage the innovative process and sacrifice our nation’s status as the global leader in technology-driven commerce. Many Nobel laureates in science and economics agree with me.

I appreciate Kaltenheuser’s perspective that the heart of this proposed legislation is a provision to create prior user rights, which encourage corporations to avoid the patent process altogether. To me, it makes sense that under current law, companies that rely on unpatented trade secrets run the risk that someone else will patent their invention and charge them royalties. What doesn’t make sense is that the Senate and House could consider, much less pass, legislation that would permit companies whose trade secrets are later patented by someone else to continue to market their products without paying royalties. Encouraging companies to hide secrets is the opposite of what is needed in an economy that relies on information.

As Kaltenheuser states, “The more closely one looks at the bill, the more its main thrust appears to be an effort by companies at the top to pull the intellectual property ladder up after them.” I’m certain that the void created will destroy the small inventor, substantially harm small business, and reduce U.S. technological innovation.

We must do all we can to preserve the rights and incentives of individuals, guaranteeing that they have ownership of, and the ability to profit from, their endeavors as the Constitution mandates. We must not rush to drastically alter our tested patent system in ways that would produce unsought, unforeseen, and unwelcome consequences.

REPRESENTATIVE DANA ROHRABACHER

Republican of California


As Skip Kaltenheuser points out in his excellent article, the bill S.507, now in the Senate, will discourage the flow of new inventions that are essential to our country’s advancement of science and technology. The bill’s proponents make arguments for this legislation and its prior user rights provision but cite no real-life cases showing that such a dramatic change in our patent system is necessary.

Considering the damage such legislation would do, as evidenced by the opposition of two dozen Nobel laureates, the Association of American Universities, the Association of University Technology Managers, and the Intellectual Property Law Institute, it is incumbent on those proponents to show an overwhelming need for this legislation. They have not done so. If a company doesn’t want to file a patent application on every minor invention, in order to protect itself against patent applications later filed by others, all it needs to do is publish that invention anywhere in the world in any language.

Under S.507, even after a patent has issued, a large company could initiate a reexamination procedure in the Patent Office and then appeal to the Board of Appeals any decision the examiner makes in favor of the patent. If the Board of Appeals also decides in favor of the patent, the large company could then appeal to the Court of Appeals for the Federal Circuit. All this extra and unnecessary legal work required of the patentee would cost him or her hundreds of thousands of dollars, so that many laudable inventions would be abandoned or given away to the deep-pocketed adversary. The U.S. patent system should not operate solely in favor of the multinationals, forcing universities, individual inventors, and start-up companies out of the patent system.

DONALD W. BANNER

Washington, D.C.

Former patent commissioner


The patent bill that the U.S. Senate is considering, S.507, modernizes America’s two-century-old patent law to bring it into the information age. The bill is strongly supported by U.S. industry, venture capitalists, educators, and the Patent Office. Opposing modernization are many attorneys who, frankly, benefit from the status quo. The opponents rally support against modernization by characterizing it as a sellout to industry and by making the claim that the laws that built America should not be changed.

The vast majority of America’s inventive genius is not patented. It is kept as trade secrets, and for good reason. A U.S. patent protects an invention only in America. When a U.S. patent is granted and published, that invention can be freely and legally copied anywhere else in the world. In most cases, trade secrets are the only effective way to protect internal manufacturing processes from being copied, and those processes are absolutely critical to maintaining our competitive position in a global economy. Indeed, protecting our trade secrets is why we worry about industrial espionage and why we don’t let competitors, especially foreign competitors, see everything they would like to see in our factories. If we were unable to keep trade secrets, we would be making a free gift of U.S. technology to the rest of the world.

In spite of this obvious truth, America’s right to have trade secrets is under powerful attack by some attorneys. Skip Kaltenheuser advances the “patent it or lose it” theory, which says that because they failed to get patents, the owners of trade secrets should be vulnerable to losing their businesses. This heavily promoted theory is based on the rather far-fetched premise that the primary purpose of patent law is to force inventors and innovators to publish the details of their technology. Under this theory, anyone who invents and fails to publish or patent the invention should lose it. To make this theory work, it is necessary to discard our cherished notion that patents should go only to first inventors. The “patent it or lose it” theory uses patents as an enforcement tool – a kind of prize awarded to people who “expose” the trade secrets of others. It would permit people who are clearly not first inventors to openly and legally patent the trade secrets of others. The new patent owner would then have the right to demand royalties or even shut down the “exposed” trade secret owner. Under this theory, the trade secrets used to make the local microbrew or even Coca Cola could legally be patented by someone else. And, of course, so could the millions of inventions and innovations that are routinely used in U.S. factories.

Existing U.S. patent law contains wording that can be interpreted (I would say misinterpreted) to give support to the “patent it or lose it” argument advanced by Kaltenheuser. The problem lies in section 102(g), which says that an inventor is entitled to a patent unless “before the applicant’s invention thereof, the invention was made in this country by another who had not abandoned, suppressed or concealed it.”

That word “concealed” is the culprit. The intention behind the wording is laudable. It is designed to ensure that inventions are used to benefit the public and that someone’s inventive work which was long buried and forgotten cannot be brought up later to invalidate the patent of another inventor who commercializes the invention and is benefiting the public. What some attorneys are now claiming, however, is that “concealed” should apply to any unpublished invention, without regard to whether or not it is being used to benefit the public. In other words, any inventive trade secret is fair game to be patented by someone who can figure it out.

The best solution to this problem is to do what most other countries have done. They protect their inventors, entrepreneurs, and trade secrets with what they call prior user rights laws. In principle, a prior user rights law provides the same kind of grandfather protection that exists in many U.S. laws. It lets businesses keep doing what they were doing even if someone comes along later and somehow manages to get a patent on their trade secret.

Title IV, the “prior domestic commercial use” section of S.507, is a very carefully worded and restricted form of prior user rights. It provides an elegant win-win solution when a patent is granted on someone else’s commercial trade secret. The bill says that if the trade secret user can prove that he was using his technology to benefit the public with goods in commerce and that he was doing these things before the patentee filed his patent, then he may continue his use. The bill contains other restrictions, including the requirement that the trade secret owner be able to prove that he was practicing the technology at least a year before the patentee filed his patent. This simple solution will make many of today’s bitter legal battles over patents unnecessary. Because a prior user need only meet the required proofs, it will no longer be necessary to attack and defend the patent’s validity. The obvious benefit to small business is such that most of the major small business organizations have come out in support of S.507.

S.507 will help stem the astronomical growth in legal fees being paid by U.S. manufacturers to protect their intellectual property. And, pleasing constitutional scholars, S.507 will restore patents to their intended purpose-encouraging technology and progress, not taking the technology of others.

BILL BUDINGER

Chairman and CEO

Rodel, Inc.

Newark, Delaware


“The bill [S. 507] was designed not for reasoned debate of its multiple features but for obfuscation,” charges Skip Kaltenheuser. To the contrary, this bill is a model of transparency: the record behind the comprehensive patent reform bill extends back to the 1989 report from the National Research Council and the 1992 report from the Advisory Commission on Patent Law Reform (itself based on 400 public comments), not to mention 80 hours of hearings over three Congresses. Does Kaltenheuser not know this history, or does he deliberately disregard it? And how can he characterize as “nonsense” a bill supported by every living patent commissioner save one?

Refuting all the errors and misleading statements in Kaltenheuser’s article would take more space than the original. A partial list of whoppers:

  1. The concluding sentence, “Let’s take time to consider each of the proposed changes separately and deliberately,” carries the false implication that this has not already been done, when demonstrably it has. Each major component was originally introduced as a separate free-standing bill. Kaltenheuser seems to be unaware that the Senate previously passed a prior user rights bill (S. 2272, in 1994). Where was he then?
  2. “There is also a constitutional question. Most legal scholars . . . interpret the . . . provision on patents as intending that the property right be ‘exclusive.'” First, the proposed prior user right (S. 507, Title IV) would create only a fact-specific defense that could be asserted by a trade secret-holding defendant, who would have to meet the burden of proof in establishing that he or she was the first inventor, before the patent holder. This fact-specific defense would no more detract from exclusivity than does the more familiar fact-specific, limited defense of fair use in copyright.

All the supposed consequences of a (nonexistent) general derogation to the patent right therefore simply cannot occur. Moreover, every other major nation already has enacted such a defense, although you would never know that from reading Kaltenheuser. Nor would you learn that the extant right is rarely invoked in litigation-in France and Germany, seven cases each over two decades; in England and Italy, no recorded cases. What the provision does is to replace high-stakes litigation where the only certainty is a harsh result-a death penalty for the trade secret, or occasionally for the patent-with a grandfather clause that leads to licensing as appropriate.

Second, Congress emphatically does have the power to create a general limitation on rights (nonexclusivity) if it so chooses. The Constitution grants a power to Congress, not a right to individuals (a point often misconstrued), and the greater power to create exclusive rights logically implies the lesser power to create nonexclusive rights that reach less far into the economy, as preeminent copyright scholar Melville Nimmer always made clear. Congress first created such nonexclusive copyright rights under the same constitutional power in 1909, and the U.S. Copyright Act today has more such limitations than any other law in the world. Claims of unconstitutionality are frivolous.

The first consideration-that a prior user right is a specific, not general, limitation of rights in the first place, carrying no loss of exclusivity-of course totally disposes of the constitutional objection. Yet the larger bogus claim needs to be demolished, as the charge of unconstitutionality carries emotional freight and will be accepted by the unsuspecting.

  3. “Entities that suppress, conceal or abandon a scientific advance are not entitled to patent or other intellectual property rights. It is the sharing of a trade secret that earns a property right.” Did Kaltenheuser read the Restatement of Torts, the Model Trade Secrets Act, or the Economic Espionage Act? The well-established general rule is that trade secret protection flows to anything that confers a competitive advantage and is not disclosed; and when the proprietor decides to practice the technology (if it is that) internally, no loss of rights applies. Trade secrets are not suspicious; to the contrary, Congress legislated federal protection in 1996, in the face of widespread espionage by foreign governments.

Companies often face difficult decisions as to which form of protection to choose-patent or trade secret. According to the late Edwin Mansfield, companies choose patents 66 to 84 percent of the time. When they pass up trying for a patent, they do so for one of two basic reasons: first, because infringement of inside-the-factory process technologies such as heat treatment would be undetectable, making a patent unenforceable; second, to avoid outrageous foreign patent fees that are designed to make a profit off foreign businesses.

Faced with the same fees, the bill’s opponents often take a self-contradictory posture, giving up on filing abroad (the only way to obtain protection outside the United States), then bemoaning their lack of protection. The bill’s supporters are working hard to reduce these outlandish fees, thereby making it more feasible for all U.S. inventors to file for patents abroad.

DAVID PEYTON

Director, Technology Policy

National Association of Manufacturers

Washington, D.C.


Skip Kaltenheuser attacks the proposal to modernize our patent law. In contrast, virtually all of U.S. industry, almost all former patent commissioners, and many successful U.S. inventors support the Omnibus Patent Act of 1997, S. 507, because it will provide increased intellectual property protection for all inventors and for those who put technology to use, whether or not it is the subject of a patent (most U.S. innovators and businesses do not have patents). Our patent law, written two centuries ago, today puts U.S. inventors and industry at a global disadvantage. The modernization bill addresses those problems and also modernizes the patent office so that it can keep pace with the rapid development of new technology and the resulting growth in patent applications.

Foreign entities now obtain almost half of U.S. patents, and they have the right to stop U.S. innovators from using any of the technology covered by those patents. Patents, no matter how obtained or however badly or broadly written, carry the legal presumption of validity, and challenging them in court can cost millions in legal fees. The modernization bill provides an inexpensive and expert forum (the patent office itself) for adjudicating questions about the validity of inappropriately obtained patents.

Kaltenheuser offers emotional quotes from people he claims oppose the bill. He cites an open letter signed by Nobel laureates. One of the signatories to that letter, Stanford University physics professor Douglas Osheroff, wrote the Senate Judiciary Committee to say, “my name was placed on that letter contrary to my wishes, and it is my expectation that it [S. 507] will indeed improve upon existing patent regulations.” Similarly, Nobel laureate Paul Berg asked that the opponents of S. 507 stop using his name because he supports the bill: “Indeed, I believe [the Omnibus Patent] bill offers improvements to the procedures for obtaining and protecting American inventions.” And in spite of Kaltenheuser’s claim that the patent bill will dry up venture capital, the National Venture Capital Association supports the bill.

Successful manufacturing depends on confidential proprietary technology-trade secrets. Kaltenheuser’s proposal to eliminate from the bill the prior user defense against patent infringement would continue to punish companies (and individuals) who invest scarce resources to develop technologies independently, do not publish or patent them, and put them to use before others have filed a patent application on these same technologies. Such disincentives to investment and risk-taking are clearly counter to sound economic policy. Without the defense, the ultimate patent recipient could force those innovators and companies to pay royalties on their own independently developed technologies, or even to stop using them altogether. A prior user defense would prevent this. Moreover, the impact on the patent holder is minimal: apart from the entity that successfully asserts the prior user defense, the patent holder can still collect royalties from any other users of the technology.

Kaltenheuser quotes former Patent Commissioner Donald Banner as saying that the only thing companies have to do is publish all their technology, and then it can’t be patented. But why should manufacturers have to publish their trade secrets so their competitors can use them, or be required to get patents just to establish the right to keep using their own innovations?

If we do not reform our patent system and U.S. companies have to publish or patent everything they do, our leading-edge technology and manufacturing will be driven offshore. We need the United States to be a safe place for creating intellectual property and putting it to work. Most foreign countries protect their native technology and industries by allowing trade secrets and prior user rights. We should do the same.

TIMOTHY B. HACKMAN

Director of Public Affairs, Technology

IBM Corporation

Chair

21st Century Patent Coalition

Washington, D.C.


Utility innovation

Like Richard Munson and Tina Kaarsberg (“Unleashing Innovation in Electricity Generation,” Issues, Spring 1998), I too believe that a great deal of innovation can be unleashed by restructuring the electric power industry. Some transformation is likely to occur simply because electricity generators will, for the first time in nearly a century, begin to compete for customers. Much more change can be stimulated, however, if we draft national energy-restructuring legislation that fosters rather than stifles innovation.

In drafting my electricity-restructuring legislation (S.687, The Electric System Public Benefits Act of 1997), I was careful to construct provisions that accomplished the direct goal of emissions reductions while stimulating innovation at the same time. One provision requires that a retail company disclose generation type, emissions data, and the price of its product so that consumers can make intelligent decisions regarding their electric service providers. With verifiable information available, many customers will choose to buy clean power. Indeed, firms currently marketing green energy in California’s competitive market are banking on consumers opting for green power. This consumer demand is likely to increase production of new supplies of renewable energy, a sustainable, clean product.

Another provision would establish a national public benefits fund, whose revenues would be collected through a non-bypassable, competitively neutral wire charge on all electricity placed on the grid for sale. Money from this fund would be available to states for R&D and to stimulate innovation in the areas of energy efficiency, demand-side management, and renewable energy.

Yet another provision establishes a cap and trade program for nitrogen oxide emissions. This provision would put in place a single, competitively neutral, nationwide emission standard for all generators that use combustion devices to produce electricity. Currently, older generation facilities do not face the same tough environmental standards as new generation facilities, and the nitrogen oxide emission rates of utilities vary by as much as 300 percent. The older facilities continue to operate largely uncontrolled and thus maintain a cost advantage over their cleaner competitors. With this provision, the older firms would be forced to upgrade to cleaner generation processes or shut down.

The three provisions I outline above are just a selection of the innovation-stimulating measures in my bill, and the measures in my bill are just a selection of the measures included by Munson and Kaarsberg in their insightful article. Congress should carefully consider including many of these proposals when it passes national energy-restructuring legislation.

SENATOR JAMES M. JEFFORDS

Republican of Vermont


Richard Munson and Tina Kaarsberg do a fine job of describing the technological advances in store for us upon electricity deregulation. I think we share the view that no issue is more important than deregulating the electric industry in such a way that technological advances, whether distributed generation and microgeneration, silicon control of power flows on the grid, or efficiency in fuel burning and heat recapture, are given the best possible chance.

But for that very reason, I’m reluctant to praise the drive for mandatory access as the fount of new technology. We do not need programs like the Public Utility Regulatory Policies Act and mandatory access to force what amounts to involuntary competition across a seized grid. Instead, government must strive to remove legal impediments to voluntary competition and allow markets to deliver competition on their own, rather than instituting an overarching federally regulated structure to manage transmission and distribution.

In electricity, the primary impediment to competition is not the lack of open access but the local exclusive franchise, usually in the form of state-level requirements that a producer hold a certificate of convenience and necessity in order to offer service.

In a free market, others should have every right to compete with utilities, but how they do so is their problem. That problem is not insurmountable, however. (Several Competitive Enterprise Institute publications explore the theme of a free market alternative to mandatory open access; see www.cei.org.)

For reform to foster technological advances fully, the size of the regulated component must shrink, not grow, as it may under open access. Mandatory access can itself discourage the development of some important new technologies by tilting the playing field back toward central generation. As evidence of this, energy consultants are advising clients not to bother with cogeneration because open access is coming; and breakthrough R&D on the microturbines we all love is hindered by regulatory uncertainty.

Ultimately, reformers must acknowledge the fundamental problem of mandatory open access: A transmission utility’s desire to control its own property is not compatible with the desire of others to hitch an uninvited ride. No stable regulatory solutions to this problem exist.

I believe the authors would find that the technological advances they anticipate are best ensured not by imposing competition but by removing the artificial impediments to it.

CLYDE WAYNE CREWS, JR.

Director, Competition and Regulation Policy

Competitive Enterprise Institute

Washington, D.C.


Richard Munson and Tina Kaarsberg present a clear vision of where power generation could go if innovation were unleashed and institutional barriers removed. The electric restructuring now underway in California deals with many of the issues they raise.

The California Energy Commission has been a strong advocate for market economics and consumer choice. We have supported CADER, the California Alliance for Distributed Energy Resources, and we are supporting the largest state-funded public interest research program to spur innovation in the industry. Because the electric industry is highly dependent on technology, I believe that industry players who wish to become leaders will voluntarily invest in R&D to provide consumer satisfaction. Since the start of restructuring, numerous investors have approached the commission with plans to build the new highly efficient, low-emission power plants cited in the article. These facilities will compete in the open market for market share. Although California’s installed capacity is extremely clean, its efficiency needs improvement. As new facilities are constructed, such as one recently completed 240-megawatt facility operating at 64.5 percent efficiency with extremely low air emissions, they will bring competitive and innovative solutions into our market.

Despite all the optimism about new generating facilities, regulatory barriers such as those described by Munson and Kaarsberg continue to inhibit the most innovative approaches, especially those in the area of distributed generation. I strongly support their call for consideration of life-cycle emissions determination and for output-based standards. Too many regulators don’t understand the need to take into account the emissions produced by the system as a whole. In addition, emissions created when equipment is manufactured and fuels are produced are often overlooked. Another area of concern is the repowering of existing sites. Those sites and related transmission corridors have extensive associated investments in infrastructure that may be lost if environmental rules do not allow for rational cleanup and reuse.

Electricity generation and reduced air emissions represent only half of the available opportunities in a restructured industry. The other half is the opportunity for more effective use of electricity. The Electric Power Research Institute has been successful in developing electrotechnologies that reduce overall energy use and minimize pollution. Armed with consumption data available from recently invented meters and the expanding information available on the Internet, customers can take greater control over how they use electricity. An active marketplace for energy-efficient products is an important goal of California’s restructuring.

California has just begun the profound change contemplated by the authors. Although it is too early to predict the final outcome, it is not too early to declare victory over the status quo. The danger in predicting the outcome of electric industry restructuring is that we will constrain the future by lacking the vision to clearly view the possibilities.

DAVID A. ROHY

Vice Chair

California Energy Commission

Sacramento, California


Genes, patents, and ethics

Mark Sagoff provides a good overview of recent changes in the interpretation of patent law that have permitted genetically modified organisms to come to be considered “inventions” and therefore patentable subject matter (“Patented Genes: An Ethical Appraisal,” Issues, Spring 1998). He also accurately lays out the concerns of religious groups that oppose this reinterpretation on theistic moral grounds. But opposition to the patenting of life is also widespread among secular advocates of the concept of a “biological commons,” supporters of the rights of indigenous peoples to benefit from their particular modes of interaction with the natural world, and scientists and legal scholars who disagree with the rationale for the Supreme Court’s 5-4 decision in Chakrabarty, which at one stroke did away with the nonliving/living distinction in law and opened the way for eventual elimination of the novelty requirement for inventions relating to biomolecules. By not dealing with this opposition, some of which, like the religionist’s concerns, also has a moral basis, Sagoff can represent as “common ground” a formula that would give away the store (large chunks of nature, in this case) to the biotech industry in exchange for its technologists acknowledging that they do not consider themselves to be God (in Sagoff’s words, “not . . . to portray themselves as the authors of life [or] upstage the Creator”). This might be acceptable to most people on the biotech side but not to any but the most legalistic of theists. It would certainly not satisfy the secular critics of patents on life.

Since the 1980 Chakrabarty decision, U.S. law treats genetically modified organisms as “compositions of matter.” This interpretation stems from an earlier opinion by the Court of Customs and Patent Appeals that the microorganism Chakrabarty and his employer General Electric sought to patent was “more akin to inanimate chemical compositions [than to] horses and honeybees, or raspberries and roses.” Thus, a biological solecism that would have raised howls from academic scientists on the boards of all the major biotech corporations, had it been included in a high court opinion relating to the teaching of evolution, was unopposed as the basis for the law of the land when there was money to be made.

Traditions within the world’s cultures, which include but are not limited to the mainstream religions, provide ample basis for resistance to the notion that everything, including living things, is fair game for privatization and transformation into product. Such commodification would inevitably come to include humanoids-the headless clones that were recently discussed approvingly by a British scientist, as well as all manner of “useful” quasi-human outcomes of germline experimentation. The Council for Responsible Genetics, a secular public interest advocacy organization, states in a petition that has already garnered hundreds of signatures that “[t]he plants, animals and microorganisms comprising life on earth are part of the natural world into which we are all born. The conversion of these species, their molecules or parts into corporate property through patent monopolies is counter to the interests of the peoples of this country and of the world. No individual, institution, or corporation should be able to claim ownership over species or varieties of living organisms.”

By ignoring such views, which have worldwide support that has often taken the form of popular resistance to the intellectual property provisions of the biotech industry-sponsored international General Agreement on Tariffs and Trade, and instead describing the major opposition to the industry position as coming from the religious community, Sagoff winds up espousing a framework that would leave in place all but the most trivial affronts to the concept of a noncommodified nature.

STUART A. NEWMAN

Professor of Cell Biology and Anatomy

New York Medical College

Valhalla, New York


Mark Sagoff makes a valiant attempt to reconcile the divergent views of religious leaders and the biotechnology industry regarding gene patenting. Yet his analysis suffers from the same misperceptions that accompanied the original statements from the religious leaders. The foresight of our founding fathers in establishing the right to obtain and enforce a patent is arguably one of the principal factors that has resulted in the United States’ pre-eminence among all the industrial countries. Throughout the 200-plus years of our nation’s history, inventors have been celebrated for the myriad of innovative products that have affected our daily lives. In biotechnology, this has meant the development of important new medicines and vaccines as well as new crop varieties that are improving the sustainability of agriculture.

The question of “ownership” of life has been wrongly stated by the clergy. As representatives from the Patent and Trademark Office have often noted, a patent does not confer ownership; it grants the holder the exclusive right to prevent others from profiting from the invention for a period of 20 years from the date the patent application was filed. Second, Sagoff alludes to patents on naturally occurring proteins. The proteins themselves are not the subject of composition-of-matter patents. What is patented is a method of purification from natural sources or through molecular cloning of DNA that will express the protein. Thus, I cannot own a protein that is produced in the human body. I can, however, have a patent on the expression of the protein or on the protein’s use in some therapeutic setting.

Sagoff’s summary of the usefulness of the patent law gives short shrift to the important feature of public disclosure. When a patent is published, it provides a description of the invention to all potential competitors, permitting them the opportunity to improve on the invention. Thus, although the original patent holder does have a period of exclusivity in which to use the invention, publication brings about new inventions based on the original idea. It is fruitless to try to protect new biotechnology inventions as trade secrets because of the large number of researchers in the industrial and academic sectors. Despite the relatively brief history of patents in the biotechnology area, there are countless examples of new inventions based on preceding patents.

Sagoff’s search for common ground leads to proposed legislation modeled on the Plant Variety Protection and Plant Patent Acts (PVPA and PPA). Passed at a time when new plant varieties could be described only by broad phenotypes, these acts were designed to provide some measure of protection for breeders of new plant varieties. Because a plant breeder can now describe the new traits in a plant variety at the molecular level, a patent can be obtained. This offers more complete protection of the invention. Consequently, the PVPA and PPA are infrequently used today.

Sagoff’s proposal is a solution in search of a problem. The case has not been made that, under U.S. patent law, the issuance of a patent confers either ownership of life or credit for its creation. Changing the law to eliminate use patents for biotechnology inventions would surely cause major uncertainties in companies’ ability to commercialize new discoveries.

ALAN GOLDHAMMER

Executive Director, Technical Affairs

Biotechnology Industry Organization

Washington, D.C.


Sensible fishing

Carl Safina’s “Scorched Earth Fishing” (Issues, Spring 1998) highlights a number of critical issues regarding the conservation of marine systems and the development of management strategies for maintaining sustainable catches from marine fisheries. The present dismal state of many wild fisheries is the result of poor management in three interconnected areas: overfishing, bycatch, and habitat alteration.

A colleague and I just completed a global review of the literature related to the effects of fishing on habitat, to serve as a reference for U.S. federal fisheries managers. Measurable effects on habitat structure, benthic communities, and ecosystem processes were found to be caused by all types of mobile gear. Because little work has been done to assess the effects of fixed-gear harvesting strategies, data are not available to suggest that fixed rather than mobile gear be used. However, common sense tells us that individual units of fishing effort, if transferred from mobile to fixed gear, would reduce the areas affected. Ultimately, it is the frequency and intensity of effects that change marine systems (for example, how many tows of an otter trawl are equivalent to a scallop dredge, how many sets of a gillnet are equivalent to an otter trawl, etc.).

Until we have much greater knowledge of how fishing mortality, bycatch, and habitat alteration interact to produce changes in marine ecosystems, precautionary approaches must be instituted in management. Total harvests must be constrained and the areas open to fishing reduced in size. Error must be biased on the side of conservation, not the reverse.

I fully concur with the suggestion that we require no-take reserves to serve as barometers of human-caused effects, to allow representative marine communities to interact under natural conditions, and to serve as sources of fish for outside areas. Even here, we are forced to broadly estimate where and how large such no-take areas should be. For many species, we have little to no data on movement rates, sources and sinks for larvae, and habitat requirements for early benthic life stages. Only by adaptively applying precautionary approaches in all three areas of management will we develop the knowledge and wisdom to manage ecosystems for the benefit of both humans and nature.

PETER J. AUSTER

Science Director, National Undersea Research Center

Research Coordinator, Stellwagen Bank National Marine Sanctuary

University of Connecticut at Avery Point

Groton, Connecticut


I write these words while at the helm of my fishing trawler on a trip east of Cape Cod; the fishing is good and getting better as the trip progresses. Carl Safina’s article is on the chart table. Most crew members who see it shake their heads and say nothing. But his unfair condemnation of the sustainable fishing practices used by me and most other trawlermen cuts deeply and demands a response.

The use of towed bottom-tending nets for harvesting fish from the sea floor is many hundreds of years old. The practice provides the world with the vast majority of landings of fish and shrimp. Bottom trawling is not without its environmental effects, but to simply declare it fishing gear non grata is not sensible. The many species that mobile gear catches–flounder, shrimp, ocean perch, cod, and haddock, to name a few–would virtually disappear from the shelves of stores and the tables of consumers around the world if bottom trawling were stopped.

Safina implies that fishermen could turn to less-invasive means of catching fish, such as hooks or traps. Yet he also knows that the use of such set or fixed devices is being restricted because it can hook or entangle mammals and birds.

What I find most disappointing about the article is that it does not take scale into consideration. It is the excessive use of high-powered fishing practices of any kind, not just mobile gear, that needs to be examined. The impact of bottom gear is acceptable when it is used in moderation and in high-energy areas that do not suffer from the disturbance it can cause; such areas make up the vast majority of the fishable sea floor. The proof is fifty fathoms below me as I tow my net across the flat, featureless plain of mud, clay, and sand that stretches for miles in every direction. For twenty years I have towed this area. Before me, thousands more towed their nets here for the gray sole, ocean dab, cod, and haddock that we still catch. Although the effects of overfishing have been dramatic, stocks are now improving as recently implemented regulations and improved enforcement take hold. The identification of critical habitat that must be protected has begun and will continue. But fishermen should not stop using modern, sustainable fishing methods that are sound and efficient just because some scientists don’t understand how our complex gear works.

If towing a net across the sea floor is like “harvesting corn with a bulldozer,” as Safina writes, how is it that we are experiencing an increase in the populations of fish that need a healthy ecosystem to thrive? Bottom trawling in this and most sea floor communities does not lower diversity and does little permanent damage when practiced at sustainable levels, as we in New England waters are currently doing.

BILL AMARU

Member, New England Fishery Management Council


Manufacturing extension

In his review of the Manufacturing Extension Partnership (MEP) (“Extending Manufacturing Extension,” Issues, Spring 1998), Philip Shapira does a good job of tracing its origins, its successes, and some of the challenges the system will face in the future. There are two factors, however, that have contributed to the success of MEP and require more consideration.

The first of these is that MEP excels in helping manufacturers become more competitive. This is no accident. The vast amount of knowledge that the industry-trained field specialists have acquired in working with 30,000 companies has led to effective service delivery based on the understanding of how small companies grow. Some of the lessons learned include:

  1. Recognizing that cutting-edge technology is for the most part not the key to success for a business (much to the chagrin of federal labs and other public technology programs). Technology is obviously relevant, but before small companies can use it effectively, they need to be well managed. Technology is simply a tool used to attain a business objective, not an end in itself.
  2. Understanding that not all small firms are equal. The vast majority of small manufacturers are suppliers of parts and components to large companies, and their ability to modernize and grow is to a large extent limited by the requirements of their customers.
  3. Realizing that many small firms want to remain small, and growth is not part of their long-term objectives. Much to the surprise of many people in public life, most individuals start a business because they want to earn a good living for themselves and their families, not because they want to become the next Microsoft.

We have also come to realize that significant contributions to a local economy (in terms of higher-wage jobs) result when a small company becomes a mid-sized company. This generally requires that a company have a proprietary product sold directly in the marketplace rather than serve only as a supplier. As a result of this understanding of the marketplace, MEP centers and their field agents tailor their strategies for increased competitiveness and growth to the specific needs of the customer.

The second factor is the tremendous job that the National Institute of Standards and Technology has done in creating a federal MEP organization that acts in a most un-Washingtonlike manner. I don’t think that the folks who put this system together have been given the appropriate credit. MEP would not exist without a substantial federal appropriation, but it is equally important to recognize that a national system that includes all 50 states could not have been put together without tremendous leadership, planning, flexibility, and organizational skills. The MEP Gaithersburg organization not only partners with several other federal agencies to bring the most appropriate resources to manufacturers at the local level but also collaborates with state affiliates to create a vision and a strategic plan for the national system. Moreover, it holds these same affiliates accountable by using the kind of results-oriented standards applied in industry.

Pretty cool for a federal agency, don’t you think?

JACQUES KOPPEL

President

Minnesota Technology, Inc.

Minneapolis, Minnesota


Emissions trading

Byron Swift’s article (“The Least-Cost Way to Control Climate Change,” Issues, Summer 1998) on the potential uses of emissions trading to implement the Kyoto agreement cost effectively has the right diagnosis but the wrong prescription. Emissions trading certainly can reduce the overall cost of achieving environmental goals. But Swift’s fealty to the “cap and trade” emission trading model blinds him to the real issues involved in developing a workable trading system for greenhouse gases.

The potential of emission trading is becoming universally accepted. Trading is a money saver. It also provides a stimulus to technological innovation, which is the key to solving the global warming problem. Emissions trading also establishes empirically a real cost for emission reductions. This can eliminate the most nettlesome problem of environmental policymaking for the past 25 years: the perennial argument between industrialists who say an environmental program will be too expensive and environmentalists who say it is affordable. Because this kind of argument becomes untenable when prices are known, trading offers the potential to replace some of the heat with light in environmental policy debates.

Unfortunately, however, when Swift looks at how to implement such a program for global warming, he gets tangled up in his advocacy of the cap and trade approach, which has been successful in addressing the sulfur dioxide emissions that cause acid rain. But acid rain is a special case, because fewer than 2,000 large, highly regulated, and easily measured sources account for more than 80 percent of all sulfur dioxide emissions in the United States. This is very different from the problem of greenhouse gases, which are emitted from millions of sources of all different sizes and characteristics.

Swift advocates a “cap and trade” system for CO2 emissions from U.S. electric power generating stations. But power generators account for only about one-third of U.S. CO2 emissions. He admits that such a system might not work for the other two-thirds of U.S. emissions. As for the 75 percent of greenhouse gas emissions produced outside the United States, he is forced to admit that “strengthening the basic institutional and judicial framework for environmental law may be necessary,” a project that he acknowledges with remarkable understatement “could take considerable investment and many years.” In the end, then, Swift’s cap and trade approach seems workable for less than one-tenth of the world’s greenhouse gas emissions: roughly one-third of the quarter of global emissions that originates in the United States.

There is also a more subtle problem with limiting the market to those players whose emissions are easily measured. Although the cap and trade system will stimulate technological innovation in the source categories included in the system, it will ignore possibilities for innovation and cost savings outside the system.

The “open trading” approach is more promising. Swift’s claim that open market trades must be verified by governments in advance is incorrect. With regard to market integrity, analogous commercial markets long ago developed many mechanisms to assure honesty in transactions, including third-party broker guarantees, insurance, and independent audits. If government involvement were desired, existing agencies could be used, or new governmental or quasi-governmental bodies created, to invest in measures that would reduce greenhouse gases and to sell the resulting greenhouse gas credits.

Swift’s article, like much of the discussion on this subject so far, puts the cart before the horse. The big political issue in creating a market system for greenhouse gases is achieving a fair, politically acceptable allocation of the rights to emit. And the big design issue is balancing the conflicting desires for the economic efficiency of the broadest possible market with verifiability. A cap and trade system modeled after the U.S. acid rain program does not satisfactorily address either of these issues in the context of global greenhouse emissions and therefore appears a poor candidate to achieve substantial reductions in greenhouse gases soon. As Swift admits, the rest of the world has neither the capacity nor the inclination to adopt such a system, and in any case the commodity emissions that are addressed by such a system represent only a small fraction of the global greenhouse emissions.

Open market trading fits the real world better than cap-and-trade systems. Of course, neither will work in the absence of government limits on greenhouse gases and a commitment to adequate enforcement. But the right trading system is a powerful tool to stimulate new and more economically efficient means of achieving greenhouse goals.

RICHARD E. AYRES

Washington, D.C.


Immigration of scientists and engineers

Since 1987, the science work force has grown at three times the rate of the general labor supply. To compound the hiring squeeze, the 1990 Immigration Reform Act resulted in a tripling of job-based visas, with scientists representing nearly one-third of the total. Immigration and the subsequent production of Ph.D.s with temporary visas, especially in the physical sciences and engineering, have clearly been a challenge to the science and technology (S&T) system of the 1990s. Yet considering immigration apart from other human resource issues might solve one public policy problem while exacerbating others.

We have often discussed human resource development with our colleagues Alan Fechter and Michael S. Teitelbaum. Their “A Fresh Approach to Immigration” (Issues, Spring 1997) captures well the policy choices that have to be made. There is no federal human resource policy for S&T. The federal government, through fellowships, traineeships, and assistantships, invests in the preparation of graduate students who aspire to join the science and engineering work force. No rational planning, however, shapes the selection criteria, the form of student support, the number of degrees awarded, or the deployment of those supported. The composition of the U.S. science and engineering work force reflects a combination of agency missions, national campaigns (such as the National Defense Education Act of 1958), and wild swings in demand stratified by region of the nation, sector of the economy, and industry needs.

Add to this the changing demographics of the student population, with increasing numbers from groups historically underrepresented in S&T, and this could be a defining moment for the future vitality of U.S. research and education in science and engineering. Although women and minorities have made dramatic gains in a number of S&T fields over the past three decades, their representation as recipients of doctoral degrees in most science and engineering fields is still far below their representation in the U.S. population at large, in those graduating from high school, or in any other denominator one prefers. A policy is surely needed. Addressing immigration would be a necessary part of that policy.

Fechter and Teitelbaum suggest that a balanced panel of experts propose separate immigration ceilings for scientists and engineers based on how the ceilings are affecting our national R&D enterprise, including the attractiveness of careers in science and engineering for our citizens. We would expand the panel’s focus to consider not only the attractiveness and accessibility of such careers to U.S. citizens in general but also the extent to which the availability of the world’s talent leads us to ignore the development of our native-born talent.

The proposed panel, they say, would operate under the aegis of the White House Office of Science and Technology Policy, with input from the Department of Labor, the Immigration and Naturalization Service, and federal agencies such as NSF, NIH, and NASA. If one favored a nongovernmental host for such a panel, the National Academy of Sciences could provide a forum for discussion and advice on the full range of human resource issues. We do not fear duplication of effort; rather the contrary.

One might ask why not just take advantage of available immigrant talent rather than pursue the sometimes more painstaking work of developing talent among historically underrepresented native-born groups? We would argue that it is imperative to cultivate a science community that is representative of our citizenry. It is equally imperative to produce research that is responsive to citizen needs and, in turn, generates political support. There is a delicate balance to strike between welcoming all and depending on other nations to populate our graduate schools and future science and engineering work force.

Such dilemmas were often debated with our friend Alan Fechter. His passing robs us all of an analytical voice. With this article, he and Teitelbaum ensured that debate to inform human resource policy and practice will continue.

DARYL E. CHUBIN

SHIRLEY M. MALCOM

CATHERINE D. GADDY

Commission on Professionals in Science and Technology

Washington, D.C.

Bioethics for Everyone

Arthur Caplan is the Babe Ruth of medical ethics. He looks like the Babe—a healthy, affable, stout man who enjoys life and is universally liked. Like the Babe, he is prodigiously productive, with a curriculum vitae that must have more pages than many academics have publications. To most Americans, he is the best-known player in the field, and he has done more than anyone to make bioethics a household word. Just as the Yankees built a giant stadium to house the Babe’s activities, the University of Pennsylvania established a major program to lure Caplan from the University of Minnesota. He does not wear a number on his back, but if he did, it would surely be retired when he steps down.

There are other parallels between Caplan and the Babe. Just as the Babe could do many things well (he was a record-breaking pitcher before he decided to concentrate on hitting), Caplan can write and talk to the ivory tower academic and the layman with equal ease. He is the undisputed master of the sound bite, but he is also a well-trained philosopher of science who has written finely argued analytic articles. He has also done empirical work, joining with social scientists to define the facts that are essential to making responsible policy.

Like most popularizers of complex subjects, Caplan is often criticized by experts in his field. Some of this is mere envy. Some of the criticism, however, is based on a more serious concern about the role of the ethicist—an ill-defined title—in public policy and public education. There are legitimate questions about the nature or even existence of expertise in ethics. (Caplan, in fact, has written one of the better essays on this subject.) The worst suspicion is that ethicists are little more than sophists, spinmeisters whose major expertise is in articulating clever arguments to support their personal views, which are ultimately as subjective as anyone else’s.

Although it is self-serving to say so, I think that there is more to ethics than that, particularly in the case of bioethics. For those who want to take bioethical issues seriously, whether for personal or policy reasons, there are better and worse ways of making decisions. One of the better ways is to know the relevant facts, have a full appreciation of the competing interests, have a clear understanding of opposing points of view, and be able to support one’s decision with arguments that are at least understandable to others. Caplan’s writings offer major assistance to those who share these goals.

This book is a collection of essays and articles, some previously published, most appearing for the first time, on a wide array of current ethical controversies in health care, from Auschwitz to the XYY syndrome. At an average length of 11 pages, these are more than sound bites but something less than analytic philosophy. They are a useful starting point for the educated person who wants an accessible entree to thinking about questions such as: “What, if anything, is wrong with transplanting organs from animals to humans?” “What rules should govern the dissemination of useful data obtained by immoral experiments?” “Are there moral issues in fetal research that should trouble those who are liberal on abortion?”

As with most controversies in bioethics, intelligent consideration of these questions requires knowledge or skills from several disciplines, including medicine, law, analytic philosophy, and the social sciences. Part of Caplan’s success as an educator is his firm grasp of all these elements and his ability to collect critical thinking from experts in those fields and weave it into a coherent essay. He also succeeds because of his extensive experience among clinicians, philosophers, lawyers, and policymakers. He is fully capable of being an academic, whether writing a finely reasoned argument in a scholarly ethics journal or joining with social scientists in explicating public attitudes on important policy issues. Although some of these essays are adapted from leading academic journals, the intended audience is not primarily scholars but the general reader, health professional, or policymaker who wants an introduction to a particular issue.

For every person who is suspicious of the glib, opinionated ethicist, there is someone else who is weary of the “two-handed” ethicist who presents all the relevant facts and arguments but scrupulously avoids judgment. Not to worry. Although Caplan is capable of presenting both sides of an argument, his own views are almost always in plain view. He is at his most enjoyable when he is most confident about the rightness of his position. In a piece about Dr. Jack Kevorkian and assisted suicide, he raises the familiar point that a major cause of Kevorkian’s success is the failure of physicians to offer adequate pain management. The point has been made by many but rarely as pungently as this: “The failure has nothing to do with a lack of general knowledge about pain control. It has to do with inadequate training, callous indifference to patient requests for relief, and culpable stupidity about addiction.”

Naturally, it is when he is most assertive (for example, “I believe that any form of compensation for cadaver organs and tissues is immoral”) that scholars will be most critical of his failure to present the issue in sufficient nuance or depth. It is impossible for anyone as prolific as Caplan to be completely consistent. One of his arguments against incentives for organ donation is that “No factual support has been advanced for the hypothesis that payment will increase cadaver donation.” Critics might point out that the policy he is most associated with—a federal law requiring that relatives be asked if they want to donate organs when it is medically appropriate—became national policy without much evidence that it would achieve the desired result. To his credit, Caplan has acknowledged the disappointing results of this policy elsewhere.

The essays vary along the spectrum from one-handed opinion to two-handed balance. Both can be effective. Sometimes it is the highly opinionated teacher who stimulates the most thinking, provoking the student to come up with counter-arguments. When Caplan is at his most opinionated, the clarity and pungency of his writing provide a useful template for organizing one’s own thinking. On the other hand, perhaps the most impressive essay in this collection is a short but lucid discussion of the increasing skepticism among some scholars regarding the definition of death. This is an extremely complicated issue, highly susceptible to opaque metaphysical discourse as well as oversimplification. Caplan does a remarkable job of explaining the issues in a balanced way, conveying the complexity, and offering a sensible political justification for the status quo.

Another reason for Caplan’s appeal is that he is not easily pigeon-holed into traditional political categories. He is sometimes libertarian, believing, for example, that competent people should have considerable latitude to use the new reproductive technologies; but there is a paternalistic element to his insistence on more regulation of infertility clinics. His views are generally centrist and reflect an attempt to find the middle ground. He argues against publication of results that were obtained from immoral experiments but supports the use of ill-gotten data that is already in the public domain, provided certain additional requirements are met, including disclosure that the information was obtained immorally. He is troubled about the spread of assisted suicide but understands that “legalization may be a good even for those who choose not to take this path. The mere fact that the opportunity for help in dying exists may help some persons to endure more disability or dysfunction than they otherwise might have been willing to face.”

An educated friend with no background in medicine, law, or ethics asked me to recommend a book that would introduce him to rational discourse in bioethics without putting him to sleep. I can recommend this book to such a novice even as I find it informative and thought-provoking for someone like me, who has spent most of his adult life thinking about these issues.

From the Hill – Summer 1998

Space station woes infuriate Congress

Cost and schedule overruns for the international space station program are increasingly exasperating members of Congress, even those who have fought long and hard to support the program. At a March hearing before the House Committee on Science Subcommittee on Space and Aeronautics, Science Committee Chairman Rep. F. James Sensenbrenner, Jr. (R-Wisc.) compared the space station to the Titanic: “The Titanic struck a single iceberg, with tragic consequences. The space station seems to be careening from one to the next, none of which has been big enough to sink the program.” Added Rep. Dana Rohrabacher (R-Calif.), the subcommittee chairman, “I don’t know how much more of the international space station program we can stand.”

Much of the concern expressed at the hearing centered on the station’s increasing cost overruns. During the FY 1998 budget authorization process, NASA notified Congress that the station program would require an additional $430 million in funding authority because of Russia’s inability to provide the station’s service module on schedule; increased costs incurred by Boeing, the prime contractor; and the need for funding to avoid “future risks and unforeseen problems.” Congress, however, approved only $230 million of the request, taking money from the space shuttle budget and increasing overall appropriations. NASA now faces the task of convincing Congress that the remaining $200 million should still be approved.

To assist NASA, the president, in his FY 1998 emergency/nonemergency supplemental appropriations request, asked Congress to provide $173 million in transfer authority to NASA. But the request was met coolly by members of the Space and Aeronautics Subcommittee, because the money would have to come out of accounts for funding space science, earth science, aeronautics, and mission support. Rep. Ralph Hall (D-Tex.) asked why appropriators should even bother funding science accounts if NASA was eventually going to transfer the money to another program.

Because of its various problems, the space station will now require an additional 18 months to complete and $4 billion in extra funding (some of which will be paid by Russia). The new overall price tag, which includes only construction costs, is about $19.6 billion, said Joseph Rothenberg, NASA’s associate administrator for space flight, at the hearing.

Members of the Senate have also told NASA that they will not tolerate any further cost and schedule problems. On March 12, the Senate Commerce, Science, and Transportation Committee passed a NASA reauthorization bill that limits space station construction spending to $21.9 billion.

Congress takes a hard look at health research priority setting

Science funding, particularly for biomedical research at the National Institutes of Health (NIH), is expected to increase significantly during the next few years. But a larger pie is still a limited pie, and research money for some diseases will increase more than money for others. With many groups vying for disease-specific funding, a continuing debate over how research priorities are set at NIH is intensifying. The 105th Congress, which has held several hearings on priority setting, has directed the Institute of Medicine (IOM) to assess the criteria and process that NIH uses to determine funding for disease research, the mechanisms for public input into the process, and the impact of statutory directives on research funding decisions. The IOM committee is expected to issue its report this summer.

The priority-setting process is complex and multitiered, possessing formal and informal components. In balancing the health needs of the nation with available scientific opportunities, criteria such as disease prevalence, number of deaths, extent of disability, and economic costs are weighed against technological developments and scientific breakthroughs. To find this balance, NIH relies on extramural scientists, professional societies, patient organizations, voluntary health associations, Congress, the administration, government agencies, and NIH staff. Accomplished investigators evaluate grant applications for merit. National advisory councils consisting of interested members of the public and the scientific and medical communities review policy. Outside experts, Congress, patient groups, the Office of Management and Budget, and other groups and agencies recommend budgetary and programmatic improvements. The final word on research programs, however, lies with the NIH director and the directors of the individual institutes.

Philip M. Smith, former executive officer of the National Research Council, has praised the current process as “pretty well right,” and the Federation of Behavioral, Psychological, and Cognitive Sciences has said that the current structure provides “many avenues of influence.” However, others are concerned that it lacks a mechanism for public input. Instead of pursuing NIH channels, many groups seeking increased research funding on specific diseases appeal directly to Congress.

Congress has the power to earmark funds for particular research areas, a process that groups such as the National Breast Cancer Coalition believe is essential for maintaining public input. But many members of Congress are not comfortable with appropriating dollars on a political rather than a scientific basis. At a March 26 hearing held by the House Commerce Committee’s Subcommittee on Health and Environment, Rep. John Porter (R-Ill.) said that if Congress consistently followed the advice of the loudest and most persistent advocacy groups, limited research dollars would be monopolized, leaving countless scientific opportunities unfunded. Porter, who chairs the appropriations subcommittee that funds NIH, recognizes the authority that Congress has to earmark but strongly opposes moving one disease ahead of another politically. “It would be a terrible mistake,” he said, agreeing with NIH officials who stress the importance of leaving research spending priorities to scientists.

Government’s role in research studied

Most economists and science policy experts agree that the federal government’s role in funding basic research is irreplaceable. However, as the R&D process has become more complex during the past half-century, the line between research that generates broad benefits and research that primarily benefits private industry has become blurred. At an April 22 hearing, the House Science Committee heard various views on the appropriate roles of government and industry in funding research, as well as appropriate mechanisms for transferring new knowledge to the private sector. The hearing was the sixth held as part of the House’s National Science Policy Study, headed by Rep. Vernon J. Ehlers (R-Mich.), which is revisiting the landmark 1945 Vannevar Bush report that established the federal government as the primary source of funds for basic scientific research. The Ehlers study was expected to be submitted to the Science Committee by the end of June.

Claude E. Barfield of the American Enterprise Institute said he estimates that one-half to two-thirds of economic growth can be attributed to technology advances and that a solid basic research effort funded largely by the federal government underpins these advances. However, he pointed out that the federal government has limited resources and oversteps its role when it supports precompetitive commercial technology development, such as the Commerce Department’s Advanced Technology Program. George Conrades of the Committee for Economic Development agreed, stating that the development and commercialization of technologies is a private sector function, except where funding serves broader government missions such as defense.

However, Conrades said that most private basic research is designed to fill gaps in broader applied research programs aimed at developing new products. Because of this commercial orientation, industry will never make sufficient investments in basic research. In 1997, of the more than $130 billion that industry spent on R&D, less than 10 percent was for basic research. And industry’s investment is only one-quarter of the total U.S. basic research effort, according to a recent report by the American Association for the Advancement of Science.

Although many members of Congress are critical of federal support for commercial projects, they recognize that states, with their more direct ties to industry, have a different role. William J. Todd, president of the Georgia Research Alliance, argued that his corporation, which was created by Georgia businesses, is one of the best examples of effective public-private partnerships. The alliance relies on the federal government to support basic research through competitively awarded grants to Georgia’s universities. This research then forms the basis of new discoveries and innovation, benefiting the government, business, and universities.

“Compromise” bill on encryption introduced

In the latest legislative attempt to deal with the controversial issue of encryption policy, Sen. John Ashcroft (R-Mo.) and Sen. Patrick Leahy (D-Vt.) introduced on May 12 what they call the E-PRIVACY Act (S. 2067). The bill would liberalize current restrictions on exports of encryption technology, but it also includes some law enforcement-friendly provisions, resulting in what its supporters say is a compromise.

The bill would allow continued access by U.S. citizens to strong encryption tools and would bar any requirement that users give a key to their data to a third party. (The administration and law enforcement agencies have insisted that access to encrypted data is essential for national security and for effectively prosecuting criminals.) It would alter current export policies by allowing license exceptions for encryption products that are already generally available, after a one-time review by the Department of Commerce.

The bill would also establish a National Electronic Technology (NET) Center within the Justice Department to help law enforcement authorities around the country share resources and information about encryption and other computer technologies. The NET Center would help officials with appropriate warrants gain access to encrypted data.

The Ashcroft-Leahy bill joins two bills that have thus far dominated the encryption policy debate in Congress. H.R. 695, introduced by Rep. Bob Goodlatte (R-Va.), and S. 909, introduced by Sens. John McCain (R-Ariz.) and Bob Kerrey (D-Neb.), would eliminate the current cap on the power and sophistication of encryption exports. Instead, they would allow the government to approve exports based on the level of sophistication generally available abroad. The bills would also prohibit the government from forcing domestic encryption users to hand over copies of their keys to the data to a centralized government-sanctioned authority.

Software producers have led the charge against export restrictions, arguing that they damage U.S. competitiveness because strong encryption products are available internationally anyway. Advocates for privacy and free speech are also aligned against the administration position, arguing that Americans are entitled to unregulated use of encrypted communication. A powerful new coalition of software businesses and online advocacy groups called Americans for Computer Privacy was launched early in March and is now spearheading a campaign to liberalize encryption controls. Scientists also have a stake in this debate, because current encryption restrictions limit the ability of computer scientists studying cryptography to publish their findings.

On March 17, the Senate Constitution, Federalism, and Property Rights Subcommittee listened to testimony about the constitutionality of encryption regulations. One of the witnesses was Cindy Cohn, lead counsel in Bernstein v. the Department of Justice, et al. For six years, Daniel Bernstein, a computer scientist, has been trying to publish on the Internet an encryption program that he wrote, a violation of current U.S. policy. Arguing that his right to free speech had been violated, Bernstein took his case to court. According to Cohn, a federal district court in the Northern District of California ruled that “every single one of the current (and previous) regulations of encryption software are unconstitutional.”

Cohn said that the current legislative proposals regarding encryption do not address the issues raised by the Bernstein case. H.R. 695, for instance, “does not clearly protect scientists such as Professor Bernstein but only protects those who seek to distribute mass market software already available abroad. This means that American scientists can no longer participate in the ongoing international development of this vital and important area of science.” The new E-PRIVACY bill has been criticized for the same reason. Other witnesses outlined similar concerns, noting that framers of the U.S. Constitution regularly enciphered their correspondence, using techniques that led to modern digital encryption. The sole administration witness at the hearing, Robert S. Litt of the Department of Justice, in referring to the Bernstein case, argued that “a restriction on the dissemination of certain encryption products could be constitutional” even if the products are being distributed for educational or scientific purposes.

New concerns about national security

Nuclear weapons tests by India and Pakistan and the possible leakage of sensitive satellite technology to China have once again focused Congress’s attention on national security issues. Soon after India’s nuclear tests were announced in May, Senate leaders pressed for a vote to force the administration to deploy a national missile defense system as soon as it is technologically feasible. Senate conservatives have been pushing for early deployment for several years, but the administration has resisted. The proposal, however, fell in a 59-to-41 procedural vote, one short of the 60 votes needed to advance.

Meanwhile, the House turned its attention to the topic of technology transfer, after reports surfaced that critical technical knowledge may have been transferred to Chinese authorities when U.S. satellite makers launched their systems on China’s Long March vehicles. Concern that China might be able to apply such knowledge to improve its own missile capabilities led the House to overwhelmingly approve a ban on any further launches of U.S. satellites by the Chinese.

House approves database bill

On May 19, the House of Representatives passed the Collections of Information Antipiracy Act (H.R. 2652), introduced by Rep. Howard Coble (R-N.C.). The bill would strengthen intellectual property protection for database publishers.

Database producers have long been calling for legislation to prevent others from electronically copying their data, repackaging it, and selling it. However, some members of the science and education communities are concerned that the Coble bill is too broad and might unduly restrict access to valuable scientific data.

Rep. George Brown (D-Calif.), ranking minority member of the House Science Committee, was the only member of Congress to speak out against the bill when it was brought to the floor. “The problem is that the bill has not found yet a proper balance between protecting original investments in databases and the economic and social cost of unduly restricting and discouraging downstream application of these databases, particularly in regard to uses for basic research or education,” Brown said.

Coble and Judiciary Committee ranking member Rep. Barney Frank (D-Mass.), however, argue that the bill fills a gap in current U.S. copyright law while still addressing the concerns of the research and education communities. “We make a distinction here in this bill between commercial use of someone else’s property and the intellectual use. If people think we have not done the balance perfectly, I would be willing to listen, but they do not want to come forward with specifics,” Frank said. Earlier in the session, the bill was amended to make employees and agents of nonprofit and educational institutions exempt from criminal liability if they violate the proposed law.

Resolving the Paradox of Environmental Protection

The next big breakthrough in environmental management is likely to be a series of small breakthroughs. Capitol Hill may be paralyzed by a substantive and political impasse, but throughout the United States, state and local governments, businesses, community groups, private associations, and the Environmental Protection Agency (EPA) itself are experimenting with new ways to achieve their goals for the environment. These experiments are diverse and largely uncoordinated, yet they illustrate a convergence of ideas from practitioners, think tanks, and academia about ways to improve environmental management.

One hallmark of the management experiments is an increased emphasis on achieving measurable environmental results. A second hallmark is a shift away from the prescriptive regulatory approaches that allowed EPA or a state to tell a company or community how to manage major sources of pollution. The experimental approaches still hold companies and communities accountable for achieving specified results but encourage them to innovate to find their own best ways to meet society’s expectations for their total operations. The experiments share a third hallmark: They encourage citizens, companies, and government agencies to learn how to make better environmental decisions over time.

EPA needs a regulatory program that is both nationally consistent and individually responsive to states, communities, and companies.

EPA is initiating some of those changes, as well as responding to initiatives taken by state and local governments, groups, and companies. A report published by the National Academy of Public Administration (NAPA) in September 1997, entitled Resolving the Paradox of Environmental Protection: An Agenda for Congress, EPA, and the States, identified and analyzed some of the most significant environmental initiatives under way in the United States, including EPA’s Project XL pilots, state efforts to encourage businesses to learn about and correct their environmental problems, and the implementation of the National Environmental Performance Partnership System (NEPPS) with the states. The report also focused on the challenge of developing performance indicators and an environmental information system that could support the new management approaches.

The increased emphasis on performance-based management responds to two social goals: increasing the cost-effectiveness of pollution controls and ensuring that the quality of the nation’s environment continues to improve. In the past, EPA and its state counterparts could exercise authority without much concern for the bluntness of their regulatory tools. Over time, the cost of many end-of-the-pipe pollution controls rose faster than the benefits they produced, so environmental improvement began to look too expensive. Now, however, the public expects agencies to strive for more cost-effective and less disruptive approaches.

EPA, state environmental agencies, and the regulated community need to accelerate the shift to performance-based protection, because several environmental problems are likely to become more serious and more expensive to manage in traditional ways. Chief among those problems are emissions of greenhouse gases, which may produce global climate change; polluted runoff from farms, urban streets, and lawns; the deposition of persistent organic pollutants and metals from the air into water bodies; and the destruction or degradation of critical natural habitats, including wetlands. Continued economic growth in the United States and in the developing world will also increase certain types of environmental stresses, particularly those caused by consumption of fossil energy.

EPA could not manage most of these problems through traditional means for three reasons. First, these problems arise from disparate sources that are so small and numerous that traditional end-of-the-pipe pollution controls often are neither technically feasible nor politically acceptable solutions. Second, the problems often require action by more than one EPA program, and this is difficult under EPA’s “stovepiped” statutes and organization. Third, many of the problem-causing activities are within the legal spheres of state and local governments or of federal agencies other than EPA.

One of the most serious threats to rivers, lakes, and estuaries, for example, is the nutrients flowing directly from huge new feeding operations for hogs, chickens, and turkeys, and indirectly from farm fields where animal wastes are spread as fertilizer. EPA recently proposed that it begin regulating the largest feeding operations on the same basis as factories and municipal sewage plants. This is an important step, but addressing runoff from smaller feedlots and from farm fields will require technical assistance, economic incentives, and coordinated action under agricultural and environmental statutes, as the states of Maryland and North Carolina discovered after their nutrient-rich waters spawned outbreaks of Pfiesteria, a toxic microorganism that killed fish and sickened humans.

Fortunately, many of the new approaches that will allow the nation to manage its remaining environmental problems will also help improve the cost-effectiveness of environmental protection overall.

A paradox and an imperative

EPA’s central challenge is to learn to maintain and improve a regulatory program that is both nationally consistent and individually responsive to the particular needs of each state, community, and company. That paradox can be resolved only if the agency and Congress continue to adopt performance-based tools. These include information management systems, market-based controls, compliance-assurance strategies, regulations that encourage firms to choose among compliance strategies, and new partnerships with states and businesses. Each of these approaches creates incentives for regulated parties to improve their overall environmental performance without specifying how they should do so. The tools are more flexible and more challenging than traditional command-and-control regulations, because they encourage innovation by rewarding those who find the least expensive ways to achieve public goals. Performance-based tools can either augment or replace traditional regulatory approaches. They encourage experimentation and learning, and they reward individuals, firms, and public managers who develop and use environmental and economic data. The most promising of the tools will foster an integrated approach to environmental protection, one that looks at air quality, water quality, ecosystem health, human health, and other social values as a whole.

Much has changed in the 30 years since the United States instituted national pollution-control programs. Americans have become more sophisticated about environmental problems and have supported the broad development and distribution of environmental professionals throughout federal agencies, state governments, local governments, and nongovernmental advocacy groups. Congress and EPA helped create that dispersed management capacity through their policies of delegating federal programs to the states. Indeed, the nation now relies on state and local agencies to do most of the work of writing permits, finding and prosecuting violators, and communicating with the public about environmental conditions. In addition, technological advances have made remote sensing and continuous emissions monitoring possible for certain types of factories and environmental conditions, effectively automating the role of the environmental inspector. The proper incentives could speed the further development and use of advanced monitoring technologies in coming years.

These changes make it possible for EPA and the states to expand their use of less prescriptive tools to achieve public goals. In a 1995 report, Setting Priorities, Getting Results: A New Direction for the Environmental Protection Agency, a NAPA panel stressed the importance of building more flexibility into the regulatory system to address problems more effectively and keep the costs of environmental protection from rising. The academy urged the administration to continue to develop its Common Sense Initiative, which aims to customize regulations and incentives for specific industries, and to seek legislative authorization for a program to grant firms and communities flexibility if they do more than just comply with existing requirements. The academy urged EPA to find ways to integrate its management of air pollution, water pollution, and waste management, thus allowing individual firms, communities, industrial sectors, or states the opportunity to find efficiencies by taking a holistic approach to problem solving.

EPA, state environmental agencies, and the regulated community need to accelerate the shift to performance-based protection.

EPA pursued many of the academy’s recommendations in the regulatory reinvention program it announced in the spring of 1995. The agency has not sought congressional authorization for most of these programs, however. Instead, EPA has attempted to maximize the flexibility within its statutes and to manage its interactions with the public and the regulated community more effectively.

Creating options and accountability

Three environmental innovations-EPA’s Project XL, Minnesota’s self-audit strategy, and NEPPS-illustrate how the new management approaches attempt to create new options for regulated entities while also ensuring accountability to the public.

The letters in XL are a loose acronym for environmental “excellence” and corporate “leadership,” the two qualities the project was designed to unite. As originally promoted, Project XL would allow responsible companies and communities to replace EPA’s administrative and regulatory requirements with their own alternatives. Through as many as 50 facility agreements, Project XL would help demonstrate which innovative approaches could produce superior environmental performance at lower costs.

Although few XL agreements have yet come to fruition, those that have suggest that the goals of the initiative are well founded. Individual facilities have been able to find smarter ways to reduce their environmental impact than they would have achieved by merely complying with all of the existing air, water, and waste regulations. Weyerhaeuser, for example, reached an agreement with the state of Georgia and EPA that removes a requirement that a company paper mill invest in a new piece of air pollution control equipment and adds a commitment by the company to reduce bleach-plant effluent to the Flint River by 50 percent, improve forest management practices on 300,000 acres to protect wildlife, and reduce nonpoint runoff into watersheds. The Intel Corporation reached an agreement allowing a manufacturing facility outside Phoenix, Arizona, to change its production processes without the customary prior approval, provided that the plant keeps its air pollutants below a capped level and provides a detailed, consolidated environmental report to the community every quarter. The XL agreement allows Intel to innovate more rapidly than it otherwise could, and that has considerable value in the computer industry.

Relatively few companies have followed these leads, because XL proposals have often been mired in controversy and uncertainty. EPA insists that companies demonstrate that their proposals will achieve “environmental performance that is superior to what would be achieved through compliance with current and reasonably anticipated future regulation.” That test inevitably requires a degree of judgment that cannot be quantified. In the Weyerhaeuser case, for example, there is no way to prove that the improved land management practices will offset any environmental damage caused by the company’s break on installing air pollution control equipment. Because EPA lacks clear statutory authority to make such judgments, EPA managers have been very conservative about the proposals they accept. The fear of citizen suits has inhibited companies from proposing XL projects as well.

Intel’s executives decided to take a conservative approach in their proposal, avoiding any actions that would violate state or federal environmental standards or require any waiver from enforcement agencies. They feared that even if EPA blessed an XL package and promised not to enforce the letter of the law, they would be liable to lawsuits from citizens. One reason EPA stressed the importance of stakeholder participation in the original XL proposal was to reduce the likelihood of such suits. Presumably, participants in the negotiations would conclude that the final agreements were in the public interest and thus refrain from suing. To date, none of the agreements has been challenged in court.

At the time of this writing, EPA officials continue to assert that they can make Project XL work under existing statutory authority, but the legal underpinnings of the pilot projects have changed. Rather than promising to waive enforcement, EPA now adopts site-specific rules to cover the most complex projects. That is, the agency issues a rule under existing federal statutes that applies only to one site. Before issuing a rule, EPA determines that the statutes provide a legal justification for the rule. Although it eliminates the problem of firms being held liable for “breaking” laws, EPA’s solution creates another dilemma-setting precedents and raising questions of equity. If Intel’s emissions cap meets the requirements of the Clean Air Act, then why shouldn’t identical permits be legal for other minor sources of air pollution?

If EPA had clear statutory authority to approve more dramatic experiments, firms would be more likely to propose them. Certainty is important to firms if they are to put their reputations on the line while investing in a public negotiation.

Exploiting the power of information

The value that companies place on their reputations has created other opportunities for EPA and states to experiment with new approaches to achieving environmental protection. Companies’ response to the creation of the Toxic Release Inventory (TRI) demonstrated that merely publishing information about firms’ emissions rates could lead many firms to reduce those emissions. TRI seems to have worked because firms wanted to avoid being on the high end of the list and because it forcefully brought emissions rates to the attention of executives who previously had noted only that they were in compliance with regulations.

Various federal and state programs, including one managed by the Minnesota Pollution Control Agency, have begun to use similar information-based tools. In Minnesota, companies or municipalities that discover, report, and fix environmental violations are often able to avoid the fines or penalties that might have been imposed had a state inspector found the problems. A 1995 state law authorized this approach to encourage firms and municipalities to conduct self-inspections or third-party environmental audits. (Minnesota does not grant these firms a right to evidentiary privilege or immunity as some states, including Texas and Colorado, have. EPA has pushed those states and others to rescind privilege and immunity statutes because unscrupulous firms could use them to avoid penalties for deliberate violations of environmental regulations.) Participating Minnesota companies receive a “green star” from the state. Thus, the statute provides companies with a new management option that stresses accountability over penalties. Small businesses have been exercising that option rather than face inspections by state officials. The result appears to be broader compliance among businesses that had historically operated below the state’s radar.

On a grander scale, the International Organization for Standardization (ISO) has developed ISO 14001 standards for corporate environmental management systems. ISO-“certified” firms and organizations maintain that the voluntary process delivers real environmental improvements, usually as a byproduct of the attention it focuses on materials use and waste management. EPA’s Environmental Leadership Program, a reinvention initiative of the Office of Enforcement and Compliance Assurance, has been encouraging firms to adopt ISO 14001 or similar environmental management systems. Some EPA officials and industry experts have speculated that ISO-certified firms might qualify for expedited permitting, looser reporting standards, or other incentives that would encourage and reward voluntary commitment to careful environmental management. However, because ISO 14001 is neither an enforceable code nor suitable for most small businesses, it is not a panacea.

These information-based tools establish incentives for improved performance while also making the public and private environmental management system better informed and thus better able to make performance-enhancing decisions. They are dynamic in ways that traditional end-of-the-pipe technology standards generally are not.

New opportunities for states

In perhaps its boldest reinvention experiment, EPA signed an agreement with the states in 1995 that created NEPPS, which attempts to establish more effective, efficient, and flexible relationships between EPA and state environmental management agencies.

Before NEPPS, the air, water, and waste division managers in EPA’s regional offices would sign individual agreements with their state counterparts spelling out how much federal money the state programs would receive and specifying requirements such as how many inspections state employees would have to conduct and how many permits they would have to issue. Throughout the 1980s and early 1990s, state commissioners grew increasingly frustrated with these agreements, because they tended to focus on bureaucratic activities rather than environmental results and because they were the vehicles EPA used to allocate its numerous categorical grants to specific activities. NEPPS has begun to structure EPA-state agreements around efforts to address specific environmental problems. State commissioners now may negotiate a single comprehensive agreement with the agency and pool much of the federal grant money that used to be categorically defined. EPA and the states are attempting to develop sets of performance measures that will keep the agencies’ attention on the environment rather than on staff activities.

After almost three years of implementation, some 40 states are participating in the new system at some level. Some states are attempting to use the process of negotiating a performance partnership agreement as a vehicle for increasing public involvement in priority setting. The New Jersey Department of Environmental Protection, for example, is investing in developing indicators of environmental conditions and trends that will provide useful information to environmental professionals and the lay public. Nevertheless, NEPPS is still in its infancy. The real test of its effectiveness will come when states, EPA, and the public must decide what to do if the core performance measures show little progress. NEPPS will work only if the states and the public are interested enough and EPA is resolute enough to insist on better performance.

Until Congress reforms itself and its systems, the promise of a fully integrated environmental program will not be met.

Meanwhile, the demands of EPA’s own enforcement office and inspector general have tended to reinforce the old ways of doing business and discourage risk taking, just as the threat of citizen suits has discouraged XL agreements. Some states are still not interested in NEPPS, perceiving it as a waste of energy as long as EPA still requires them to submit information on the old bureaucratic measures and as long as EPA holds onto its traditional oversight tools: the right to bring enforcement actions in states and to remove delegated programs from a state’s control.

If it can be successfully implemented, NEPPS will be the perfect complement to the ultimate reinvention experiment endorsed by Congress: the Government Performance and Results Act (GPRA) of 1993. GPRA requires EPA and all other federal agencies to supply Congress with a strategic plan, a set of measurable goals and objectives, and periodic reports on how well the agency is making progress toward those objectives. The NEPPS agreements could provide the foundation of such an effort.

Needed: Better data

The key to success for all the performance-based systems described above is for EPA, the states, and the public to have access to an extensive base of reliable authoritative information about environmental conditions and trends. EPA’s information systems are not yet adequate to meet that challenge.

Technological advances are beginning to make it possible for agencies to collect, manipulate, and display far richer and more extensive information about environmental conditions. It is becoming cheaper and easier to measure emissions and environmental conditions remotely as well as automatically. Increasingly, firms and states can submit reports electronically, making it possible for all environmental stakeholders to have quick and easy access to environmental information.

Even so, technology’s promise to dramatically improve decisionmakers’ access to information about environmental conditions and trends has not yet been realized. Despite large public and private investments in environmental monitoring and reporting, the nation does not have a comprehensive and credible environmental data system. That deficiency makes it difficult for decisionmakers and the public to answer basic questions about the effectiveness of environmental regulatory programs. The problem has several components: The data available to EPA are incomplete, fragmented among different program offices and their databases, and often unreliable. And there are more basic gaps in scientific understanding of environmental problems, their causes, and their consequences. EPA has struggled for years to address these information problems, and it is not yet clear that the agency or Congress has put in place a program that will soon produce objective, credible, and useful environmental statistics.

Congress must play

Taken as a whole, EPA’s reinvention initiatives are moving the nation’s environmental management system in a positive direction. To date, however, those initiatives have operated only at the margins of EPA’s core programs and will continue to be of only marginal importance unless Congress and the agency strengthen their commitment to experimentation and change. The states’ actions are broadening the base for reinvention and making many of the tools of performance-based management familiar to business managers, regulators, and the general public. As that base broadens, the impasse at the federal level will probably dissolve.

EPA’s underlying structural problems, its authorizing statutes, and the fragmentation of congressional committees with a role in environmental issues all remain barriers to effective multimedia action and performance-based management. The agency’s media offices still do the bulk of the day-to-day business and still focus on chemical-by-chemical, source-by-source regulation. State agencies, professional networks, funding channels, advocacy groups, and congressional committees have replicated that structure, creating enormous structural inertia. One product of that inertia is inefficiency. Even if every one of EPA’s regulations made perfect sense by itself, they could not add up to the ideal environmental management regime for different kinds of facilities operating in different geographical settings with different population densities and weather conditions. The nation’s physical, economic, and political conditions are too varied for the old regulatory approaches to fit well across the nation. A focus on performance will improve the application of those approaches, but ultimately EPA needs a more effective way to address problems and facilities holistically, as Project XL is striving to do. Every EPA administrator has struggled with those problems. Eventually, Congress will need to help resolve them.

EPA has not acted on two of the major recommendations in Setting Priorities, Getting Results: producing a comprehensive reorganization plan to break down the walls between the media offices and developing a comprehensive integrating statute for congressional action. One reason for the lack of progress has been the fierce party partisanship on Capitol Hill. Although it is not clear when the political climate will be more conducive to progress on such a difficult task, the academy’s recommendations for changes will remain relevant and important. To better integrate policymaking across program lines, EPA should study the effects that reorganization has had on its regional offices and the states they serve as well as the reorganizations that several state environmental agencies have undertaken. The New Jersey Department of Environmental Protection, for example, has integrated its permitting systems, which may suggest lessons for EPA.

Another of the most politically challenging recommendations in the 1995 report remains untouched: Congress has not consolidated its committees that have roles in environmental oversight. That continued fragmentation of responsibility in Congress takes its toll on EPA-and on the environment itself-by reinforcing fragmented approaches in the agency. Until Congress reforms itself and its systems, the promise of a fully integrated environmental program will not be met.

EPA has tried numerous strategies in the past few years to overcome some of the challenges created by its patchwork of authorizing statutes. Significant progress, however, will require statutory reform. By beginning a gradual legislative process to integrate EPA’s authorities, Congress would encourage EPA to seek the most efficient ways possible to improve the nation’s environment. It is important to restate the obvious: The nation’s environmental statutes, and EPA’s implementation of them in partnership with the states, have accomplished great environmental gains that benefit all Americans and strengthen the nation’s future. It is also obvious that the nation needs to do more to improve the quality of the environment-domestically and globally-and to find better ways to do that work.

Congress should lead that change by working with EPA to develop an integrating statute-a bill that would leave existing statutes essentially intact while beginning a process to harmonize their inconsistencies and encourage integrated environmental management. The integrating statute should be more modest, less threatening, and hence more pragmatic than a truly unified statute. The bill should accomplish the following five objectives:

  1. Congress should articulate its broad expectations for EPA in the form of a mission statement.
  2. Congress should direct EPA to integrate its statutory and regulatory requirements for environmental reporting, monitoring, and record keeping. This effort should eliminate redundant or unnecessary reporting requirements, fill reporting or monitoring gaps where they exist, and establish consistent data standards. This would make the information more useful to public and private managers, regulators, and the public.
  3. Congress should direct EPA to conduct a series of pilot projects to fully test the ideas that inspired Project XL. The statute should authorize EPA to use considerable discretion to develop model projects for multimedia regulation, pollution reduction, inspections, enforcement, and third-party certification of environmental management systems. The goals of such pilot projects should be to develop the most productive ways to achieve environmental improvements on a large scale. Thus, some of the pilots might test the potential for future multimedia regulation of specific sectors, or opportunities for interrelated businesses and communities to achieve their environmental and social goals through totally unconventional means requiring more freedom to innovate than the statutes currently permit.
  4. The statute should affirm that Congress authorizes and encourages EPA to use market-based mechanisms such as trading systems to address pollutants, including nonpoint pollutants, when the agency believes they would be appropriate.
  5. Congress should direct EPA to support a series of independent evaluations of the pilot projects and other activities that it authorizes under the statute. EPA should also provide biennial reports to Congress that include analysis of its accomplishments and barriers to accomplishment, as well as recommendations for congressional action.

Adopting such a statute would have substantive and symbolic value. Substantively, the statute would authorize changes that should enhance the nation’s ability to make new environmental improvements at the lowest possible cost. By authorizing experiments in multimedia management, for example, the statute should encourage innovations that would reduce nonpoint pollution or ecosystem damage. Symbolically, the statute would settle the debate within EPA and the regulated community about whether integrated performance-based protection is important, appropriate, or legal.

In the months that have elapsed since NAPA published its report, it has become clear that the passage of environmental legislation of almost any kind is highly unlikely within the next year or two. Two bills have sparked some interest, though neither is gaining much momentum. A bill sponsored by Sen. Joseph Lieberman (D-Conn.) would authorize XL-type projects. Though it resisted any such legislation for a time, EPA is giving it some support. However, detailed procedural requirements in the bill leave business unenthusiastic while failing to overcome the skepticism of environmental advocates who have resisted XL from the start. The “Regulatory Improvement Act of 1997,” also known as the Thompson-Levin Bill after its sponsors, Sens. Fred Thompson (R-Tenn.) and Carl Levin (D-Mich.), would require federal agencies, including EPA, to conduct regulatory analyses, including cost-benefit analyses, when issuing major regulations. Although it boasts bipartisan support, the bill appears to be mired in the stalemate that emerged around risk assessment and cost-benefit analysis in the 104th Congress in 1995.

Nevertheless, the concepts sketched out here are becoming widely accepted in the states and among pragmatic policy advocates. If Congress continues to take GPRA seriously and if EPA and the states continue to take NEPPS seriously, there will be a demand for more and better indicators of environmental performance and trends. That in turn should help government agencies adopt the most effective tools for managing environmental problems.

Making protection automatic

The 18th-century economist Adam Smith showed how the “invisible hand” of free markets could foster innovation, competitive pricing, and economic growth. Two hundred years later, Garrett Hardin showed how the invisible hand could also produce the “tragedy of the commons”-the depletion of shared resources in the absence of a collective decision to manage them for the public good. Paradoxically, a combination of market forces and public actions can help the nation achieve its environmental goals. The United States needs to keep making collective decisions to protect and restore the environment for the public good and the well-being of future generations. To the maximum extent possible, however, the nation should attempt to employ invisible hands-the creative energy of millions of decisionmakers pursuing their self-interest-to achieve the nation’s environmental goals.

EPA, states, localities, and the regulated community need to develop more comprehensive, comprehensible, and useful measures of environmental conditions and trends. The increase in public understanding of the environment and environmental risks over the past four decades has motivated the incredible progress the nation has made in reducing pollution levels and restoring environments. But the public will need a deeper understanding if it is to make the increasingly sophisticated judgments needed for continued improvements at reasonable costs. EPA’s efforts to develop performance-based management tools will help the public participate more fully in managing the environment. Credible information about environmental performance, public policies that harness market forces, and public pressure-the expectation of a private commitment to the public welfare-may ultimately be enough to keep most businesses and communities operating on a track of continuous environmental improvement.

Love Canal revisited

From August 1978 to May 1980, the nondescript industrial city of Niagara Falls, New York, named for one of the world’s great scenic wonders, acquired a perverse new identity as the site of one of the 20th century’s most highly publicized environmental disasters: Love Canal. It was the first, and in many ways the worst, example of a scenario that soon reproduced itself in many parts of the country. Toxic chemicals had leaked from an abandoned canal used as a waste dump into nearby lots and homes, whose residents seemed unusually afflicted by a wide variety of health problems, from miscarriages and birth defects to neurological and psychological disorders. State and federal officials tried desperately to assess the seriousness of the danger to public health, hampered by a lack of reliable scientific data and inadequately tested study protocols. Controversies soon erupted, and panicky homeowners turned to politicians, the news media, and the courts for answers that science seemed unable to deliver. The crisis ended with a combined state and federal buyout of hundreds of homes within a several-block radius of the canal and the relocation of residents at an estimated cost of $300 million.

The name Love Canal has entered the lexicon of modern environmentalism as a virtual synonym for chemical pollution caused by negligent waste management. The episode left a lasting imprint on U.S. policy in the form of the 1980 federal Superfund law, which mandated hugely expensive cleanups of hazardous waste sites around the nation. Community-based environmental activism also took root at Love Canal, following the model pioneered by the local homeowners’ association and its charismatic leader Lois Gibbs. A question left tantalizingly in the air was whether, in times of heightened public anxiety, it is possible for public health officials to undertake credible scientific inquiry, let alone whether such inquiry has the power to inform policy decisions. Much has been written on this subject, including Adeline Levine’s Love Canal: Science, Politics, and People (Lexington Books, 1982), an early sociological account of the controversy.

Why, then, do we need another book about Love Canal now, 20 years after the event burst on our national consciousness? Allan Mazur, a policy analyst at Syracuse University, answers by drawing an analogy between his book and Rashomon, the classic film by Akira Kurosawa, which has come to symbolize the irreducible ambiguity of human perceptions and relationships. In the film, the story of a rape and murder is retold four times from the viewpoints of the four principal characters: a samurai, his wife, a bandit, and a passing woodcutter. The story of Love Canal, Mazur argues, involved similar discrepancies of vision, so that what you saw depended on where you stood in the controversy. But whereas the artist Kurosawa was content to leave ambiguity unresolved, the analyst Mazur is determined to reconcile his conflicting accounts so as to offer readers something akin to objective truth. No unbiased reading was possible, he implies, as long as the principals in the Love Canal drama were propagating their interest-driven accounts of what had happened and who was to blame. Now, at a remove of 20 years, Mazur is confident that his disinterested academic’s eye, liberated from “strong favoritism,” will allow us to glimpse a reality that could not previously be seen.

In pursuing the truth, Mazur imitates Kurosawa’s narrative strategy, but the resemblance turns out to be skin-deep. The book begins with six accounts of the events from 1978 to 1980, representing the viewpoints of the Hooker Chemical Company (the polluter); the Niagara Falls School Board (negligent purchaser or Hooker’s innocent dupe); two groups of homeowners who were compensated and relocated on different dates; the New York State Department of Health; and Michael Brown, the hometown reporter who broke the story and later wrote a bestseller about it. In the second part, Mazur, like Kurosawa’s woodcutter, emerges from behind the scenes to give us his rendition of the events. But whereas the woodcutter was just one more voice in Rashomon, Mazur claims something closer to 20/20 hindsight. Evaluating in turn the news coverage, the financial settlements, and the scientific evidence, he even-handedly declares that there is enough blame to go around among all the parties involved. His impatience with Lois Gibbs, however, is palpable, and he holds the news media responsible for succumbing too easily to her story, which he finds least credible despite its later canonical status.

Lost in a time warp

At 218 pages plus a brief appendix, Mazur’s version of the Love Canal story is refreshingly brief. This is all to the good, because it is hard to read this book without feeling that one is caught in a time warp. The analytic resources that the book deploys seem almost as dated as the events themselves. For instance, the list of references, drawing heavily on the author’s own prior work, shows very little awareness of the fact that scientific controversies have emerged over the past 20 years as a major focal point for research in science and technology studies and in work on risk. In Mazur’s world, therefore, all is still as innocently black and white as it seemed to be in the 1960s: Either a chemical has caused a disease or it has not; either experts are doing good science or they are not; either people are unbiased or they are interested.

In such a world, disagreements occur because people with interests distort or manipulate the facts to suit their convenience. Reason, common sense, and good science would ordinarily carry the day were it not for political activists such as Lois Gibbs who muddy the waters with their “gratuitous generation of fear and venomous refusal to communicate civilly.” Powerful news organizations are unduly swayed by “articulate and sympathetic private citizens, often photogenic homemakers, who are fearful about contamination that threatens or has damaged their families.”

There are hints here and there that the author is aware of greater complexity beneath the surface, but his failure to acknowledge nearly two decades of social science research prevents him from achieving a deeper understanding. In his commitment to some idealized vision of “good science,” for example, Mazur loses sight of the fact that standards for judging science are often in flux and may be contested even within the scientific community. Against the evidence of a mass of work on the history, philosophy, and sociology of science, he asserts that there are clear and unambiguous standards of goodness governing such issues as the use of controls, the design of population studies, the conduct of surveys, and the statistical interpretation of results. Not surprisingly, Mazur concludes that the homeowners’ most effective scientific ally, Dr. Beverly Paigen, failed to meet the applicable standards. The data-collecting efforts of nonexpert “homemakers,” such as Gibbs, are dismissed with even less ceremony.

None of this is very helpful in explaining the profoundly unsettling questions about trust and credibility that Love Canal helped bring to the forefront of public awareness. A firm grasp of constructivist ideas about knowledge creation would have helped, but Mazur evidently knows only a straw-man version of social construction that strips it of any analytic utility. Instead of using constructivism as a tool for understanding how knowledge and belief systems attain robustness, Mazur dismisses this analytic approach as mindlessly relativistic. There is an unlovely smugness in his assertion that constructivists would “take it for granted that the Indians’ account [of the Battle of Little Big Horn] is no more or less valid than the army’s account.” He tilts again at imaginary windmills a page later, writing that “Few things can be proved absolutely to everyone’s satisfaction. There is a possibility that we are all figments of a butterfly’s wing; I can’t disprove it, but I don’t care.”

One could read Mazur’s accounts of the parties’ positions in the Love Canal debate as an attempt at social history, but here again one would be disappointed. The presentation draws largely on a limited number of sources, usually in the form of first-person narratives or interviews; and even these are not always adequately referenced, as in the case of the 1978 source from which most of Lois Gibbs’s story is drawn. The effort to provide multiple perspectives on the same events often leads to unnecessary, almost verbatim repetition, as with a statement by Health Commissioner David Axelrod that is quoted on p. 98 and again on p. 169. The book will remain a useful (though perhaps not totally reliable) compendium of things people said during the controversy. There are occasional wonderful touches, as when Gibbs describes the homeowners’ appropriation of expert status at a 1979 meeting with Axelrod. All the residents who attended “wore blue ribbons symbolic of Axelrod’s secret expert panel”; Paigen’s ribbon said that she was an expert on “useless housewives’ data.” These are the ingredients with which a gripping history may someday be fashioned by a storyteller with a different agenda.

The book’s inspiration, one has to conclude, is ultimately more forensic than academic. Unlike Kurosawa’s all-too-human actors, Mazur’s institutional participants have the character of parties to a staged lawsuit, offering their briefs to the court of reconciled accounts. Mazur himself seems to relish the role of judge, able to cast a cold eye on others’ heated accounts and to sort fact from fancy. But common-law courts have always been reluctant to do their fact-finding on the basis of records that have grown too old. People forget, move away, or die, as indeed did happen in the case of David Axelrod, a remarkable public servant whom Mazur aptly characterizes as the tragic hero of Love Canal. Documents disappear. New narratives intervene, adding confusion to an already-cacophonous story. In a court of law, a rejudging of responsibility for Love Canal would have been barred by a statute of limitations. History, to be sure, admits no such restriction, but Mazur, alas, is no historian.

Finally, it is interesting to observe that recent policymaking bodies have been, if anything, more charitable toward citizen perceptions and participation than the author of this book. In 1997, for example, the Presidential/Congressional Commission on Risk Assessment and Risk Management recommended that risk decisions should engage stakeholders at every stage of the proceedings. Similar recommendations have come from committees of the National Research Council. Impersonal policymaking bodies, it appears, can learn from experience. Is it unreasonable to expect more from academic social scientists, who have, after all, more leisure to reflect on what gives human lives meaning?

Computers Can Accelerate Productivity Growth

Conventional wisdom argues that rapid change in information technology over the past 20 years represents a paradigm shift, one perhaps as important as that caused by the electric dynamo near the turn of the century. The world market for information technology grew at nearly twice the rate of world gross domestic product (GDP) between 1987 and 1994, so the computer revolution is clearly a global phenomenon.

Yet measured productivity growth has been sluggish in the midst of this worldwide technology boom. In the United States, for example, annual labor productivity growth (with productivity defined as output per hour of work) actually fell, from 3.4 percent between 1948 and 1973 to 1.2 percent between 1979 and 1997. Total factor productivity (TFP) growth (with TFP defined as output per unit of all production inputs combined) also fell substantially, from 2.2 percent per year to just 0.3 percent per year for the period 1979 to 1994. In light of the belief that computers have fundamentally improved the production process, this is particularly puzzling. As Nobel laureate Robert M. Solow has observed, “You can see the computer age everywhere but in the productivity statistics.”
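For readers who want the two measures spelled out, a minimal formal statement (assuming the standard growth-accounting conventions, which the article itself does not write out) is:

\Delta \ln \mathrm{LP}_t = \Delta \ln Y_t - \Delta \ln H_t, \qquad \Delta \ln \mathrm{TFP}_t = \Delta \ln Y_t - \sum_i w_i \, \Delta \ln X_{i,t},

where Y is output, H is hours worked, the X_i are all production inputs (labor, capital, materials), and the w_i are their cost shares. On these definitions, the slowdown cited above corresponds to labor productivity growth falling from roughly 0.034 to 0.012 per year and TFP growth from roughly 0.022 to 0.003 per year.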

Economists have long argued that both output and productivity are poorly measured in service sectors.

Detailed analysis of the U.S. economy suggests that computers have had an impact, but it is necessary to look beyond the economy-wide numbers in order to find it. New technology affects each business sector differently. For most sectors, the computer revolution is mainly a story of substitution. Companies respond to the declining price of computers by investing in them rather than in more expensive inputs such as labor, materials, or other forms of capital. The eight sectors that use computers most intensively, for example, added computers at a rate of nearly 20 percent per year from 1973 to 1991, whereas labor hours grew less than 3 percent per year. This capital deepening (defined as providing employees with more capital to work with) dramatically increased the relative labor productivity of the computer-using sectors, those with more than 4 percent of total capital input in the form of computers in 1991.

Before 1973, labor productivity in the manufacturing sectors that invested heavily in computers grew only 2.8 percent per year, compared with 3.1 percent for those that did not. After these sectors accumulated computers so rapidly in the 1970s and 1980s, however, their labor productivity growth jumped to 5.7 percent per year for 1990 to 1996, whereas growth in the other manufacturing sectors declined to 2.6 percent per year. Comparison of the relative performance of these sectors over time shows that computers are playing an important role in determining labor productivity.

Computer-related productivity gains in the manufacturing sectors also suggest that measurement errors have been a large obstacle to understanding the economy-wide impact of computers on productivity. Computer investment is highly concentrated in service sectors, but in those sectors there is no clear evidence of the dramatic productivity gains found in manufacturing. Economists, however, have long argued that both output and productivity are poorly measured in service sectors. If one conjectures that the true impact of computers is approximately the same in both manufacturing and services, these results imply an increasing understatement of output and productivity growth in the service sectors.

The computer-producing sector reveals yet another way in which the computer revolution affects economy-wide productivity growth. This sector experienced extraordinary TFP growth of nearly 3 percent per year in the 1980s, reflecting the enormous technological progress that enabled computer companies to churn out superior computers at lower and lower prices. This one sector, despite being relatively small in terms of private GDP (less than 3 percent), was responsible for one-third of TFP growth for the entire U.S. economy in the 1980s.

Moving beyond aggregate data

Computers have experienced dramatic price declines and extraordinary investment growth in the past two decades. The price of computer investment in the United States decreased at the remarkable rate of more than 17 percent per year between 1975 and 1996, whereas the broader investment category of which computers are a part, producers’ durable equipment (PDE), increased more than 2 percent per year. At the same time, and mostly in response to rapid price declines, business undertook a massive investment in computers. Starting near zero in 1975, the computer share in real PDE investment in the United States increased to more than 27 percent by 1996. With cumulative investment in new computer equipment near $500 billion for the 1990s, U.S. companies have clearly embraced the computer. Countries across the globe are also rapidly accumulating computers. Between 1987 and 1994, growth in the information technology market exceeded GDP growth in 21 out of the 24 member countries of the Organization for Economic Cooperation and Development (OECD). These figures present a compelling view of the depth and breadth of the computer revolution. From Main Street to Wall Street, computers appear everywhere, and computer chips themselves can also be found inside automobiles, telephones, and television sets.

Yet aggregate productivity growth remains flat by historic standards. And services–which are the most computer-intensive sectors–show the slowest productivity growth. This apparent inconsistency is at the heart of the computer productivity paradox.

Any attempt to explore this paradox, however, must move beyond the economy-wide data on which it is based. The aggregate data hide many illuminating details. For most companies, computers are a production input they invest in, just like new assembly lines, buildings, or employee training. Not all companies use computers the same way, however. Nor can all companies benefit from computer investment. These important differences are lost in the economy-wide data. Furthermore, computers are also an output from a particular manufacturing sector.

To explore these differences, the U.S. economy was divided into 34 private sectors and ranked according to their use of computers. Eight of these sectors use computers intensively–more than 4 percent of their capital was in the form of computers in 1991–and were labeled computer-using sectors. As shown in Table 1, these eight sectors accounted for 63 percent of total value added and 88 percent of all computer capital input in 1991.

Computers are highly concentrated within three service sectors–trade; finance, insurance, and real estate (FIRE); and “other services,” which includes business and personal services such as software, health care, and legal services–that account for more than 75 percent of all computer inputs. In manufacturing, only five of 21 sectors used computers intensively enough to be labeled computer-using; they accounted for less than 40 percent of total manufacturing output in 1991.

Computers are not everywhere

This wide variation in computer use is evident in recent surveys of adoption of computer-based technologies. For example, in 1993 a staggering 25 percent of all manufacturing plants surveyed by the U.S. Census Bureau used none of 17 advanced technologies. Moreover, patterns of adoption varied greatly by industry and technology. In fact, very few of the surveyed technologies showed use rates greater than 50 percent, and many (particularly lasers, robots, and automated material sensors, all of which depend on computers) were used by fewer than 10 percent of surveyed plants. The most prevalent technologies are computer-aided design and numerically controlled machine systems. Virtually identical surveys in Canada and Australia confirm the diversity reported by U.S. manufacturers.

OECD surveys also show that computers are highly concentrated in specific sectors. In Canada, France, Japan, and the United Kingdom, for example, information and communication equipment is steadily increasing its share of total investment and is much more highly concentrated in the service sectors. OECD estimates indicate that in 1993 the service sector contained nearly 50 percent of all embodied information technology for the seven major industrial nations and that this capital was concentrated primarily in finance, insurance, services, and trade.

More specific data from France and Germany suggest that computers are becoming universal in some industries. Nearly 90 percent of all workers in the French banking and insurance industry used a personal computer or computer terminal in 1993, up from 69 percent in 1987 and substantially more than the 30 to 40 percent in French manufacturing industries. In Germany, nearly 90 percent of surveyed companies in the service sector report that computers are important in their innovation activities.

Although computers may appear to be everywhere, they are actually highly concentrated in the service sectors and in only a few manufacturing sectors.

When the price of an input falls, companies respond by substituting the cheaper input for more expensive ones. With the enormous price declines in computers, one would expect to see companies substitute less expensive computers for relatively expensive labor and other inputs. For example, companies might replace labor-intensive written records with computer-intensive electronic records. Detailed analysis of the U.S. sectoral data suggests that this is exactly what happened. The eight computer-using sectors invested in computers rapidly and substituted them for other inputs. From 1973 to 1991, these eight sectors report annual growth in real computer input in excess of 17 percent, with seven out of eight above 20 percent (see Table 1).

When compared with the growth rates of labor and output in these sectors, the swift accumulation of computers appears even more striking. In contrast to the phenomenal growth rates of computer capital, labor hours declined in three sectors and experienced growth rates above 3 percent in only two. Similarly, output growth ranged from -0.4 percent to 4.7 percent per year. Moreover, substituting computers for other inputs is not limited to these computer-intensive sectors; the phenomenon is observed in virtually every sector of the U.S. economy.

Several independent company-level studies from the United Kingdom, Japan, and France also suggest that an important part of the computer revolution is substitution of inputs. The French study, for example, found a strong positive relationship between the proportion of computer-users and output per hour. A survey of Japanese manufacturing and distribution companies finds that information networks complement white-collar jobs but substitute for blue-collar jobs.

Rather than looking at empirical relationships between computers, productivity, and employment patterns, the Australian Bureau of Statistics used a more subjective, although still informative, approach. In a 1991 survey of manufacturing companies, nearly 50 percent rated lower labor costs as a “very important” reason for introducing new technology. A 1994 follow-up study found that almost 25 percent of the companies cited reducing labor costs as a “very significant” or “crucial” objective in technological innovation. These survey results offer still more evidence that companies expect high-tech capital to substitute for other production inputs.

Measuring productivity

These results suggest that a large part of the computer revolution entails substitution of one production input (computers) for others (labor and other types of capital). But is this just wheel-spinning? The answer depends on how productivity is defined and measured and what one means by wheel-spinning. Economists use two distinct concepts of productivity: average labor productivity (ALP) and total factor productivity (TFP). Although these concepts are related, they cannot be used interchangeably, and TFP is the productivity measure most favored by economists when analyzing the production process.

ALP is defined simply as output per hour worked. A major advantage of this measure is computational; both output and labor input statistics are relatively easy to obtain. Since the 1930s, however, economists have recognized that labor is only one of many production inputs and that labor’s access to other inputs, especially physical capital, is a key determinant of ALP. That is, when their labor is augmented by more machines and better equipment, workers can produce more. This increase in output need not reflect harder work or improved efficiency but is simply due to increases in the complementary inputs available to the labor force.

This key insight led to the concept of TFP, defined as output per unit of total inputs. Rather than calculating output per unit of labor as in ALP, TFP compares output to a composite index of all inputs (labor, physical capital, land, energy, intermediate materials, and purchased services, augmented with quality improvements), where different inputs are weighted by their relative cost shares.
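The arithmetic behind this definition can be made concrete with a minimal sketch. The snippet below uses invented numbers and a simple share-weighted calculation; it is only an illustration of the idea, not the estimation procedure behind the sectoral figures reported here.

# Stylized TFP growth calculation with hypothetical numbers (illustration only).
# TFP growth is approximated as output growth minus the cost-share-weighted
# growth of all measured inputs.

output_growth = 0.030          # assumed 3.0 percent annual output growth

# (annual growth rate, cost share) for each input -- assumed values
inputs = {
    "labor":     (0.010, 0.60),
    "capital":   (0.045, 0.25),
    "materials": (0.020, 0.15),
}

weighted_input_growth = sum(rate * share for rate, share in inputs.values())
tfp_growth = output_growth - weighted_input_growth   # the "residual"

print(f"TFP growth: {tfp_growth:.2%} per year")

In this toy example, output grows faster than the weighted bundle of inputs, and the difference is recorded as TFP growth.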

Increased TFP has often been interpreted as technological progress, but it more accurately reflects all factors that generate additional output from the same inputs. New technology is a key source of TFP growth, but so are economies of scale, managerial skill, and changes in the organization of production. Furthermore, technological progress can be embodied, at least in part, in new investment.

ALP and TFP are fundamentally different concepts, although TFP is an important determinant of ALP. ALP grows–that is, each worker can produce more–if workers have more or better machinery to work with (capital deepening), if workers become more skilled (labor quality), or if the entire production process improves (TFP growth).
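In stylized form, assuming constant returns to scale and abstracting from the exact index-number formulas behind the estimates discussed here, the decomposition can be written as

\Delta \ln(Y/H) \;\approx\; s_K \,\Delta \ln(K/H) \;+\; s_L \,\Delta \ln q \;+\; \Delta \ln A ,

where Y is output, H is hours worked, K is capital input, q is an index of labor quality, s_K and s_L are the cost shares of capital and labor, and \Delta \ln A is TFP growth. The first term is capital deepening, the second labor quality, and the third TFP growth.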

Despite the connection between these two concepts, the trend toward greater use of computers implies different things for each measure of productivity. If investment in computer capital is primarily for input substitution, then ALP should increase as labor is supported by more capital. TFP, however, will not be affected directly; it will increase only if computers increase output more than through their direct impact as a capital input. It is this more-than-proportional increase that many analysts have in mind when they argue that increased investment in computers should result in higher productivity.

It is easier to define these productivity statistics than to measure and apply them. There is a growing consensus among economists that both output growth and productivity growth are poorly measured, especially in the fast-growing service sectors with a high concentration of computers. This measurement problem is part of a more fundamental issue concerning output growth and quality change. Most economists agree that quality improvements are an important form of output growth that need to be measured. The U.S. Bureau of Economic Analysis (BEA) officially measures the enormous quality change in computer equipment as output growth. Based on joint work with IBM, BEA now uses sophisticated statistical techniques to create “constant-quality price indexes” that track the price of relevant characteristics (such as processor speed and memory). These price indexes allow BEA to measure the production of real computing power and count that as output growth. Thus, the quality-adjusted price of computer equipment has fallen at extraordinary rates, while real computer investment has rapidly grown as a share of total investment in business equipment.
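A hedonic regression is one standard way such a constant-quality index can be built. The sketch below uses invented data and a single quality characteristic, so it should be read as an illustration of the idea rather than as BEA’s actual specification: the coefficient on the year dummy estimates how the price of a machine of fixed quality changed between the two years.

import numpy as np

# Illustrative hedonic sketch of a constant-quality price index.
# Made-up data, one characteristic (processor speed) -- not BEA's actual model.
speed = np.array([100.0, 150.0, 200.0, 300.0])
year  = np.array([0.0,   0.0,   1.0,   1.0])      # 0 = first year, 1 = second year
price = np.array([2000.0, 2650.0, 2400.0, 3150.0])

# Regress log price on a constant, log speed, and the year dummy.
design = np.column_stack([np.ones_like(speed), np.log(speed), year])
coef, *_ = np.linalg.lstsq(design, np.log(price), rcond=None)

# The year-dummy coefficient is the log price change holding quality constant;
# here it is negative even though sticker prices rose with each faster model.
print(f"quality-adjusted price change: {np.expm1(coef[2]):+.1%}")

The point of the exercise is that the index tracks the price of computing characteristics, not of boxes, which is why measured computer prices fall even as list prices hold steady or rise.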

For other sectors of the economy, however, output is harder to define and measure. In the FIRE sector, for example, BEA extrapolates official output growth for banks from employment growth, so that labor productivity is constant by definition. Yet most would argue that innovations such as ATMs and online banking have increased the quality of bank services. Because difficulties of this type pervade the service sectors, output and productivity estimates for those sectors must be interpreted with caution.

Labor productivity growth

In the early 1970s, the industrial world experienced a major growth slowdown in terms of aggregate output, ALP, and TFP. Economists have offered many possible reasons for this slowdown–the breakdown of the Bretton Woods currency arrangements, the energy crisis, an increase in regulation, a return to normalcy after the unique period of the 1950s and 1960s, and an increase in the share of unmeasured output–but a clear consensus has not yet emerged. Because the computer revolution began in the midst of this global slowdown, untangling the relationship between computers and productivity growth is particularly difficult. For example, does the drop in U.S. ALP growth from 3.4 percent (1948 to 1973) to 1.2 percent (1973 to 1996) mean that computers lowered ALP growth? Or would the slowdown have been much worse had the computer revolution never taken place? Without the proper counterfactual comparison–what productivity growth would have been without computers–it is difficult to identify the true impact of computers.

Our approach to that problem is to compare ALP growth in the computer-using sectors with the non-users in the 34-sector database before and after the slowdown period. Chart 1 compares growth rates of average labor productivity for five computer-using sectors in manufacturing and 16 other manufacturing sectors for 1960 to 1996. For the early period of 1960 to 1973, labor productivity growth was roughly the same for the two groups-2.8 percent per year for computer-using sectors and 3.1 percent per year for non-computer-using sectors. Both groups then suffered during the much-publicized productivity slowdown in the 1970s as ALP growth rates fell to about 1.5 percent per year during the period 1973 to 1979.

As the computer continued to evolve and proliferate in the 1980s, businesses adapted and their production processes changed. Personal computers-first classified as a separate investment good in 1982-became the dominant form of computer investment, and ALP growth accelerated in the computer-using sectors in manufacturing. Between 1990 and 1996, these sectors posted strong ALP growth of 5.7 percent per year, whereas other manufacturing sectors managed only 2.6 percent per year. Because ALP growth for computer-using sectors prior to the 1970s was lower than in other manufacturing sectors, this analysis strongly suggests that computers are having an important impact on labor productivity growth in U.S. manufacturing.

The same comparison for nonmanufacturing sectors yields quite different results, with no obvious ALP gains for computer-using sectors outside of manufacturing (Chart 2). Rather, the 3 computer-using sectors and the 10 non-computer-using sectors show healthy productivity growth prior to 1973, but sluggish productivity growth thereafter: Labor productivity grew only 0.9 percent per year for computer-using sectors in nonmanufacturing between 1990 and 1996, and 0.8 percent for other nonmanufacturing sectors.

The sharp contrast in productivity growth in computer-using sectors in manufacturing and in services highlights the difficulties associated with productivity measurement. Economists have long argued that output and productivity growth are understated in the service sectors due to the intangible nature of services, unmeasured quality change, and poor data. These results support that conjecture and further imply that measurement problems are becoming more severe in the computer-intensive service sectors. This suggests that much of what computers do in the service sectors is not being captured in the official productivity numbers.

Although measurement errors probably understate output and productivity growth in the computer-intensive service sectors, this does not change the finding of significant input substitution. In the trade and FIRE sectors, for example, the growth of labor slowed while computer inputs increased more than 20 percent per year from 1973 to 1991. Because capital and labor inputs are measured independently of service-sector output, this type of primary input substitution is not subject to the same downward bias as is TFP growth. Whatever the true rates of output and TFP growth, these service sectors are clearly substituting cheap computers for more expensive inputs.

Variation in growth

Estimates of TFP growth for each of the 34 sectors demonstrate no relationship between TFP growth and the growth of computer use. TFP grew in some sectors, fell in others, and stayed about the same in the rest, but there was no obvious pattern relating TFP growth to computer use. Nor was any relationship evident for just the eight computer-using sectors (see Table 1). These findings suggest that, in contrast to increases in ALP, there have been few TFP gains from the widespread adoption of computers.

Many consider this disappointing. Learning lags, adjustment costs, and measurement error have been suggested as reasons for a slow impact of computers on TFP growth. It is important to remember, however, that this finding is entirely consistent with the evidence on input substitution. If computer users are simply substituting one production input for another, then this reflects capital deepening, not TFP growth. Recall that TFP grows only if workers produce more output from the same inputs. If investment in new computers allows the production of entirely new types of output (for example, complex derivatives in the financial services industry), the new products are directly attributable to the new computer inputs, not to TFP growth.

This conclusion partly reflects BEA’s explicit adjustment for the improved quality of computers and other inputs, but most economists agree that quality change is an important component of capital accumulation. That is, when computer investment is deflated with BEA’s official constant-quality price deflator, the enormous improvement in the performance of computers is then folded into the estimates of computer capital. Quality improvements are effectively measured as more capital, so capital becomes a more important source of growth, and the TFP residual accounts for a smaller proportion of output growth.
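A back-of-the-envelope calculation shows the mechanics. The figures are assumed for illustration, in the spirit of the roughly 17 percent annual price decline cited earlier:

# Illustrative arithmetic only (assumed numbers, not the official series):
# flat nominal spending deflated by a constant-quality price index that falls
# 17 percent a year shows up as rapidly growing real computer capital input.
annual_price_decline = 0.17
years = 10
growth_factor = (1 / (1 - annual_price_decline)) ** years
annual_real_growth = growth_factor ** (1 / years) - 1
print(f"real input grows {annual_real_growth:.1%} per year, "
      f"about {growth_factor:.1f}-fold over {years} years")

Even with no increase in nominal outlays, measured real computer input grows at roughly 20 percent a year, which is why quality improvement shows up as capital accumulation rather than as TFP.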

So far this analysis has focused on the role of computers as an input to the production process. But computers are also an output; companies produce computers and sell them as investment and intermediate goods to other sectors and as consumption and export goods. Because the observed input substitution in computer-using sectors is driven by rapid price declines for computer equipment, it is important to examine the production of computers themselves and investigate the source of that price decline.

The data show that TFP is the primary source of growth for the computer-producing sector and a major contributor to the modest TFP revival in the U.S. economy, particularly in manufacturing. From 1979 to 1991, virtually the entire growth in output in the computer-producing sector is attributable to TFP growth; that is, output grew much faster than inputs and caused a large TFP residual. In fact, output grew 2.3 percent per year even though labor, energy, and material inputs actually declined. The computer-producing sector is itself also an important user of computers; nearly 40 percent of the growth in output attributable to capital services comes from computer capital over this same period.

Rapid growth in TFP in the computer-producing sector contrasts with sluggish TFP growth in the entire U.S. private-business economy, which fell from more than 1.6 percent per year before 1973 to -0.3 percent for the period from 1973 to 1979. Even the computer-producing sector showed negative TFP growth in that period. After 1979, however, the story is very different. While annual TFP growth for the 35 sectors rebounded mildly to 0.3 percent per year, TFP growth in the computer-producing sector jumped to 2.2 percent for 1979 to 1991.

The aggregate economy consists, by definition, of its sector components. How much of economy-wide TFP growth reflects TFP growth from the computer-producing sector? In the 1980s, it was as much as one-third of total TFP growth. In the 1990s, TFP growth in the sector remained high, but because there were increases in TFP growth in other manufacturing sectors, it accounted for less of the total, about 20 percent between 1991 and 1994.
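A rough calculation using the figures already cited shows how so small a sector can matter so much for the aggregate. The sketch below uses simple share weighting and rounded numbers, so it is only indicative; formal sectoral growth accounting uses somewhat different (Domar-type) weights.

# Crude back-of-the-envelope check (simple share weighting, illustration only).
computer_sector_share = 0.03        # just under 3 percent of private GDP
computer_sector_tfp_growth = 0.029  # roughly 3 percent per year in the 1980s
aggregate_tfp_growth = 0.003        # about 0.3 percent per year after 1979

contribution = computer_sector_share * computer_sector_tfp_growth
print(f"share of aggregate TFP growth: {contribution / aggregate_tfp_growth:.0%}")
# roughly 30 percent -- on the order of the one-third figure cited in the text.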

Recent estimates of TFP growth for manufacturing industries confirm these trends. Of the 20 manufacturing sectors analyzed by the Bureau of Labor Statistics (BLS), “industrial and commercial machinery,” where computers are produced, showed the most rapid annual TFP growth: 3.4 percent per year from 1990 to 1993. Total manufacturing, on the other hand, showed TFP growth of just 1.2 percent for the same period. Although these estimates are not directly comparable to those derived from the 35-sector database, they confirm the importance of the computer-producing sector in economy-wide TFP growth.

Given the substantial work by BEA on computer prices, real output growth in the computer-producing sector is probably among the best measured. Thus, the estimates of rapid output and TFP growth in the computer-producing sector appear sound. Furthermore, these results support the conventional wisdom that computers are more powerful, affordable, and widespread than ever. Recent work at BEA, however, suggests that constant-quality price indexes should also be used for other production inputs. If the quality of these other inputs such as semiconductors is improving rapidly but costing less, TFP growth will be overstated in the sectors that use these inputs and understated in the sector that produces them. This kind of mismeasurement primarily affects the allocation of TFP among sectors, not the economy-wide total TFP.

The substitution of computers for other, more expensive, inputs goes a long way toward explaining the computer paradox. The impact of computers is observable not in TFP, as many observers perhaps expected, but in the accumulated stock of computer capital. This explains why, despite the pickup in labor productivity growth after 1979, economy-wide TFP growth has remained low. For most sectors, computers are a measured input that contributes directly to economic growth. Rapid TFP growth occurs primarily in the computer-producing sector, where faster, better computers are continually offered at ever-lower prices. This reflects fundamental technological advances that are driving the computer revolution and makes a substantial contribution to economy-wide TFP growth.

Moreover, there is little indication that this growth will slow. BLS, for example, projects that labor productivity growth in the computer and office equipment industry will accelerate to 9.9 percent per year through 2005. If these projections are correct and companies continue to substitute relatively inexpensive computers for costlier older models, computers will become an increasingly important source of economic growth.

Saving Medicare

For the past generation, ensuring access to health care and financial security for older Americans and their families under the Medicare program has been an important social commitment. The elderly are healthier now than before Medicare was enacted, and they and their families are protected from financial ruin caused by high medical expenses. Medicare has also helped to narrow the gap in health status between the most and least well-off, and it has contributed to medical advances by supporting research and innovation that have led to increased health, vitality, and longevity among the elderly.

Now this remarkable social commitment to older persons may be weakening, largely because of Medicare’s faltering finances. As everyone knows, Medicare’s costs are rising rapidly, and members of the huge baby boom generation will soon begin to retire. Although the boomers are expected to be healthier in their older years than their parents and grandparents were, the surge in the number of people over 65 will probably mean much more chronic illness and disability in the population as a whole and will raise the demand for health services and long-term care. Thus, ensuring the continued commitment of health protection to the elderly will require bolstering the Medicare program.

Medicare now pays only about 45 percent of the total health care costs of older Americans.

Medicare spending topped $200 billion in 1996. Despite an overall slackening of medical price inflation over the past few years, Medicare spending has continued to outpace private health insurance spending. The possibility that Medicare outlays will continue to grow faster than overall federal spending is a major concern. Medicare’s share of the federal budget is expected to increase from under 12 percent in 1997 to 16 percent by 2008.

The solvency of the Medicare Part A trust fund depends on a simple relationship: income into the fund must exceed outlays. In 1995, for the first time since Medicare began, outlays exceeded income. In 1997, Part A expenditures were $139.5 billion, whereas income was $130.2 billion. The difference was made up from a reserve fund, which had $115.6 billion left in it at the end of 1997. By 2008, the reserves are expected to be depleted.
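The arithmetic of depletion is straightforward to sketch. The toy projection below assumes, purely for illustration, that income and outlays both grow 5 percent a year and ignores interest earned on the reserve; the trustees’ actual projections rest on far more detailed assumptions.

# Purely illustrative projection of the Part A trust fund, ignoring interest
# earned on the reserve and assuming both flows grow 5 percent a year; the
# trustees' more detailed assumptions put depletion near 2008.
income, outlays, reserve = 130.2, 139.5, 115.6   # billions of dollars, 1997
growth = 0.05                                    # assumed annual growth rate

year = 1997
while reserve > 0 and year < 2030:
    year += 1
    income *= 1 + growth
    outlays *= 1 + growth
    reserve -= outlays - income                  # shortfall drawn from reserves

print(f"under these assumptions the reserve runs out around {year}")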

Because the first baby boomers become eligible for Medicare in 2011, action is needed now to solve the program’s long-term financial problems. The combination of increased federal outlays for millions of aging Americans and projected smaller worker-to-retiree ratios will most assuredly lead to the Medicare program’s inability to meet the health care needs of next century’s elderly population. This scenario is unavoidable without substantial reductions in the growth of Medicare spending, increased taxes to cover program costs, or both.

A 17-member national Bipartisan Commission on the Future of Medicare is now considering changes needed to shore up the program. The commission’s mandate is to review Medicare’s long-term financial condition, identify problems that threaten the trust fund’s financial integrity, analyze potential solutions, and recommend program changes. A final report is due to Congress by March 1, 1999. Two approaches the commission is considering are raising the Medicare eligibility age from age 65 to 67 and requiring beneficiaries to pay more of the program’s costs. Congress has already demonstrated its willingness to restructure the program. In 1997, with little public input or comment, the Senate voted to gradually raise the Medicare eligibility age to 67 years, an action that would have ended the entitlement to Medicare for persons age 65 and 66. Fortunately, the House refused to support this measure. But the Senate vote underscores how great the stakes are in the current debate.

Fixing Medicare’s fiscal problems will require difficult choices. Whatever changes are made will ultimately affect every American directly or indirectly. To assess the proposals now on the table, it is essential that people of all ages have a basic understanding of the Medicare program: how it is financed, what services it pays for, and its current limitations.

Medicare basics

The Medicare program, enacted in 1965, provides health insurance for persons 65 and older if they or their spouses are eligible for Social Security or Railroad Retirement benefits. Also eligible are disabled workers who have received Social Security payments for 24 months; persons with end-stage renal disease; and dependent children of contributors who have become disabled, retired, or died. When Medicare was enacted, only 44 percent of older Americans had any hospital insurance. Today, Medicare provides coverage to 98 percent of those 65 and older (more than 33 million people) as well as 5 million disabled persons.

Although Medicare is referred to as a single program, it has two distinct parts–one for hospital insurance and one for physician services–with separate sources of financing. Enrollment in Part A, the Medicare Hospital Insurance Trust Fund, is mandatory for those eligible and requires no premium payments from eligible persons. Persons not eligible can purchase Part A coverage; the cost of the premium depends on how much covered employment a person has. Persons with no covered employment must pay the full actuarial cost, which was $3,700 per year in 1997. Enrollment in the Supplementary Medical Insurance Trust Fund (Part B) is voluntary and is limited to those who are entitled to Part A. Nearly all persons over 65 who are eligible for Part A also purchase Part B coverage. The current premium is $43.80 per month. (Those who choose not to enroll often have generous coverage as a retirement benefit.) Nearly 80 percent of Medicare beneficiaries also have supplemental insurance, either “Medigap” or a retirement plan.

About 90 percent of Part A income comes from a 2.9 percent payroll tax-half paid by employees and half by employers. Self-employed individuals pay the full 2.9 percent. The rest comes from interest earnings, income from taxation of some Social Security benefits, and premiums from voluntary enrollees. Part B of Medicare is financed primarily through beneficiary premiums (about 25 percent) and general revenues (about 75 percent). Part B does not face the prospect of insolvency because general revenues always pay any program expenditures not covered by premiums.

Should beneficiaries pay more?

One proposal to reduce federal outlays for Medicare would require beneficiaries to pay a larger share of the program’s costs. But the proposal fails to consider a compelling fact: Medicare currently pays only about 45 percent of the total health care costs of older Americans. Contrary to what many Americans assume, Medicare benefits are less generous than typical employer-provided health policies. Although Medicare covers most acute health care services, it does not cover significant items such as many diagnostic tests, eye examinations, eyeglasses, and hearing aids. Most important, it does not cover the cost of prescription drugs.

Many chronic conditions are now successfully controlled with medications. In 1998, Medicare beneficiaries with prescription drug expenses will spend an average of $500 per person on medications. Beneficiaries who need to take multiple drugs on a daily basis can easily pay twice that amount. Of the 10 standard Medigap policies, only the three most expensive ones include prescription drug coverage, and some of these are not sold to people with preexisting health conditions. In short, most seniors are simply out of luck when it comes to insurance protection for prescription drug expenses.

Another gap in protection is for long-term care, which includes health care, personal care (such as assistance with bathing), and social and other supportive services needed during a prolonged period by people who cannot care for themselves because of a chronic disease or condition. Long-term care services may be provided in homes, community settings, nursing homes, and other institutions. Medicare’s coverage of nursing home care and home health care is generally limited to care after an acute episode, and Medigap policies don’t cover long-term care. Private long-term care insurance is available, but very few people over 65 purchase it because of high premium costs. The average cost of a moderately priced policy at age 65 is about $1,800 per year; at age 79, about $4,500.

The federal government is significantly overpaying managed care companies for the care of Medicare beneficiaries.

Part A and Part B services require considerable cost sharing in the form of deductibles and copayments, and unlike most private health insurance plans, Medicare does not have a catastrophic coverage cap that limits annual financial liability. Since the 1980s, several legislative changes have increased the share of program costs for which Medicare beneficiaries are responsible through premium increases, higher deductibles, and increased copayments. In 1997, Medicare beneficiaries spent on average about $2,149, or nearly 20 percent of their income, on out-of-pocket costs for acute health care. These costs include premiums for Part B and Medigap insurance, physician copayments, prescription drugs, dental services, and other uncovered expenses. These estimates do not include payments for home health care services or skilled nursing facility care not covered by Medicare. In 1995, Medicare beneficiaries who used skilled nursing facilities (less than 3 percent of all beneficiaries) had an average length of stay of about 40 days, which would have required a copayment of about $1,900; some Medigap policies will pay this. Compared with persons under 65 who have insurance, Medicare beneficiaries pay a significantly larger share of their health care costs out of pocket.

Medicare’s home health benefit is a particular focus of proposals to increase cost sharing. The home health benefit covers intermittent skilled nursing visits and part-time intermittent home health aide visits to assist people with tasks such as bathing and dressing. Home health is the only Medicare benefit that does not require cost sharing. A congressional proposal last year would have required a $5 copayment per visit for many Medicare beneficiaries who receive home health care. Although this may appear to be a trivial amount, a closer look at the recipients of home care and the amount of services they receive indicates that the copayment could indeed prove to be unaffordable for a large proportion of home health beneficiaries. About a third of these beneficiaries have long-term needs and, compared with other home health users, are more impaired and poorer than the average Medicare beneficiary. This group of long-term home health users is also more likely to have incomes under $15,000 a year. Because they receive on average more than 80 home health visits a year, a $5 copayment would increase their costs by more than $400 a year. There are serious equity concerns about requiring the poorest and most infirm Medicare beneficiaries to pay such an amount.

Individually purchased Medigap insurance is often medically underwritten and is expensive. Annual premiums can range from $420 to more than $4,800, depending on the benefits offered and the age of the beneficiary. Annual premiums for a typical Medigap plan reached $1,300 in 1997. Despite their high cost, however, many standard Medigap policies offer little protection. Only 14 percent of beneficiaries have policies that cover prescription drugs. Rising Medigap premium costs, fueled in part by the growth of hospital outpatient services that require a 20 percent copayment, have become an issue of growing concern to many older Americans.

Employer-provided retiree health coverage is another important source of financial security for some older Americans. Retirees, especially those who have the least income or the poorest health, value this extra security as well as the comprehensiveness of the insurance as compared with Medigap plans. However, the number of firms that offer health benefits to Medicare-eligible retirees is declining; fewer than one-third of all large firms now offer retiree health benefits. Firms are also limiting their financial obligations to retirees by restricting health benefit options, tightening eligibility requirements (such as required length of employment), and increasing individual cost sharing. And more companies are replacing insurance with defined contribution plans in which retirees get a fixed dollar amount to purchase health benefits. Thus, the value of the employer contribution can diminish over time.

The relationship between Medicare and the supplemental insurance market is complex and will continue to change as the market changes in many areas of the country and as the nature of employer-sponsored retiree coverage evolves. These trends and their interrelationships must be thoroughly assessed when considering possible structural alterations in Medicare. It cannot be assumed that if Medicare raises deductibles and copayments, Medigap policies will cover the increased cost sharing, or if they do, that they will remain affordable.

Medicare beneficiaries, whatever their income, pay the same monthly Part B premium. Some people believe that requiring people with higher incomes to pay a larger amount would be more equitable. In 1997, Congress considered but did not enact a proposal that would have made the Part B premium income-related: Medicare beneficiaries with higher incomes would have been required to pay more, and rates for those with low incomes would have remained unchanged. Concerns were raised that any approach to make people with higher incomes pay more would undermine the social insurance basis of the program and encourage higher-income elderly persons to opt out. However, Congress dropped the proposal primarily because administering it would have been complex and costly and because it would not have produced significant new revenue. This is not surprising, given that 50 percent of persons 65 and older have incomes less than $15,000 per year.

Age of eligibility

The Bipartisan Medicare Commission is specifically charged to make recommendations on raising the age-based eligibility for Medicare. Proponents argue that because the eligibility age for Social Security will gradually rise to age 67 in 2025, Medicare’s eligibility age should rise as well. This reasoning, however, is based on a misunderstanding of how Social Security works. It isn’t the eligibility age for Social Security that is being raised but rather the age at which a person is eligible to receive full benefits from the program. All persons eligible for Social Security will still have the option of retiring at age 62 with reduced benefits, and many do.

If Medicare eligibility were to truly parallel Social Security eligibility, then Medicare would offer reduced benefits to early retirees who would be required to pay a greater portion of the actuarial value of the Part B premium than current beneficiaries pay. Such a policy was recently proposed by the Clinton administration as a practical way of providing health insurance to people aged 55 to 64 who need to take early retirement. The proposal was criticized by those who questioned its budget neutrality and by those concerned about the affordability of premiums for low-income Medicare beneficiaries, particularly as they reach advanced ages.

Although reducing the number of people who are eligible for Medicare would appear to be a major source of program savings, closer analysis indicates that it is not. A recent study found that even if the eligibility age were raised immediately, no more than a year would be added to the life of the Part A trust fund, because a significant number of people aged 65 and 66 would still qualify for Medicare on the basis of disability. Also, per capita Medicare expenditures for persons 65 and 66 years old are less than two-thirds of the cost for the average beneficiary. It is estimated that raising the eligibility age to 67 would reduce total annual program costs by only 6.2 percent. For this small saving, the number of uninsured 65- and 66-year-olds could reach 1.75 million.

We have to acknowledge the possibility that society will choose to pay more-in short, higher taxes-for Medicare’s continued protection.

By any measure, the incidence of health problems increases with advanced age. The earlier the age of retirement, the more frequently poor health or disability is cited as the primary reason for retiring. Health insurance and the access it provides to medical care become more important as people grow older because of the increasing risk of having major and multiple problems. In the current medically underwritten health insurance market, people who are older and who do not have employer-provided insurance are not likely to be covered. Either they are considered medically uninsurable because of preexisting conditions, or they are charged so much because of a preexisting condition that they can’t afford the policy. On the basis of data from the National Medical Expenditure Survey, a rough estimate of the cost of a private, individual insurance policy covering 80 percent of expenses for a 65-year-old person is about $6,000 per year.

It is tempting to assume that if the age for Medicare eligibility were increased, employers who provide health benefits to retirees would simply extend their coverage to fill the gap. In reality, the opposite is likely to happen. Economists at Rand found that between 1987 and 1992, the percentage of employers offering retiree health benefits to persons under 65 decreased from 64 percent to 52 percent. This is not surprising, because retiree coverage of those 65 and older is supplemental to Medicare and therefore costs much less than the coverage provided to those under 65. A recent study found that, among employers offering coverage, the average annual cost per early retiree under 65 was $4,224; for retirees 65 and older, it was $1,663. For at least a decade, U.S. employers have been cutting back on health insurance coverage for workers and retirees, and this trend is expected to continue indefinitely.

Recent court rulings have further contributed to the insecurity of retiree health benefits. In separate 1994 rulings, two federal appeals courts decided that under the Employment Retirement Income Security Act (ERISA), employers can modify or terminate welfare benefit plans, notwithstanding promises of lifetime benefits that were given to employees. In effect, the courts ruled that “informal” documents distributed to employees promising lifetime benefits are basically irrelevant if the actual contract specifies different provisions. In 1995, the U.S. Supreme Court refused to hear an appeal of an Eighth Circuit Court decision that allowed a company to modify its retirees’ health benefits. Also in 1995, the Supreme Court found that employers are generally free under ERISA, for any reason and at any time, to adopt, modify, or terminate welfare and health benefit plans.

Taken together, the high cost of health insurance, the decreasing number of companies offering retiree health benefits, and the unfavorable court rulings make it extremely unlikely that employers are going to step in and provide health care coverage for people aged 65 and 66. Thus, the Medicare Commission should not consider an increase in Medicare’s eligibility age without fully recognizing the limitations of the private health insurance market and the certain increase in the number of uninsured persons. Any reasonable proposal to increase the Medicare eligibility age must include provisions for people aged 65 and 66 to buy into the Medicare program. But this is not a straightforward solution, because it raises questions about the affordability of the actuarially based Medicare premium (about $3,000 in 1997) and the resulting need for subsidies for low-income persons.

Is managed care the savior?

One of the most significant recent developments within the Medicare program is the introduction of managed care. More than 5 million Medicare beneficiaries are enrolled in managed care plans, and the Congressional Budget Office has projected that within a decade, nearly 40 percent of beneficiaries will be enrolled in managed care. Some think that managed care could be the salvation of the Medicare program, because they believe that it can simultaneously reduce overall Medicare expenditure growth and provide better benefits and lower cost sharing for Medicare enrollees. There are valid reasons for thinking that this scenario is in fact too good to be true.

Managed care plans do tend to provide more generous benefits and require less cost sharing than the traditional Medicare program. By accepting limits on their choice of doctors and hospitals, Medicare beneficiaries may secure valued benefits such as prescription drugs, often without additional charges. Managed care plans receive a monthly payment per enrollee from the federal government that is about 95 percent of the average cost of treating Medicare patients in the fee-for-service sector. But because plans tend to attract healthier individuals, whose cost is less than this per capita rate, Medicare pays more than it otherwise would have for these beneficiaries under fee-for-service. One study by the Physician Payment Review Commission found that the cost of treating new Medicare managed care enrollees was only 65 percent of the cost of treating beneficiaries under the fee-for-service system. This overpayment problem is exacerbated by the fact that people can switch from managed care plans to fee-for-service once they become seriously ill, and managed care plans clearly have a strong financial incentive to adopt practices that will encourage them to do so.
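A stylized reading of the figures in the preceding paragraph illustrates the size of the distortion; actual payments varied by plan and locality, so this is simple arithmetic, not an estimate.

# Stylized overpayment arithmetic using the figures cited above.
ffs_average_cost = 100.0                       # index the average fee-for-service cost to 100
plan_payment = 0.95 * ffs_average_cost         # Medicare pays about 95 percent of that average
new_enrollee_cost = 0.65 * ffs_average_cost    # new managed care enrollees cost about 65 percent

overpayment = plan_payment - new_enrollee_cost
print(f"overpayment per healthy new enrollee: about {overpayment:.0f} percent of average FFS cost")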

How to deal with the distortion of financial incentives caused by people with high medical expenses remains a difficult problem. People with major chronic illnesses who need more extensive medical services than the average older person are not considered attractive enrollees because they pose a major financial liability for health plans. If statistical adjustments in Medicare plan payments based on the health status and diagnosis of enrollees could be developed, health plans would be less likely to avoid enrolling this population and less likely to skimp on their care. However, the development of predictive risk models powerful enough to offset risk selection practices by plans is still many years away.

Thus, achieving Medicare savings from an increase in managed care enrollments is not ensured. Health maintenance organizations (HMOs) may choose to enter the Medicare market if they believe that the per capita payments, relative to costs, will yield financial rewards. However, there may prove to be a fine line between setting federal payments high enough to attract HMOs into the Medicare program and setting plan payments at a level that yields any real program savings. Federal budget officials will need to keep their fingers crossed and hope that Medicare savings from managed care will grow over time as plans gain experience in controlling utilization and reducing lengths of hospital stays among the elderly.

One final proposal to revamp Medicare must be examined. For nearly 20 years, one school of economists has argued that we need to fundamentally alter the way we think about Medicare benefits. Their view is that instead of an entitlement to benefits, a Medicare beneficiary should receive a voucher to be used to purchase private health insurance. Supporters argue that this approach would offer beneficiaries the ability to select a health plan that best meets their health and financial needs, much as federal employees do through the Federal Employees Health Benefits Program. It would also give Congress better control of Medicare spending. But theoretical advantages are tempered by practical questions: how to prevent insurance companies from engaging in risk selection; how to prevent diminished quality of care, particularly in lower-cost plans; and most important, how to ensure that the amount of the voucher is sufficient to purchase at least the same amount of benefits and financial protection that Medicare currently provides. This last concern is particularly important for beneficiaries who have the least income and the greatest medical needs. When considering this approach, the Medicare Commission will need to carefully balance hypothesized cost savings with other societal goals such as affordability, access, and quality of care.

Will society choose to pay more?

Medicare’s structure and financing must evolve if it is to meet the health care needs of the retired baby boom cohort. The changes that will be required will entail difficult policy choices. Raising Medicare’s eligibility age to 67 will cut some costs, but at what price? Are the savings worth the cost of creating a new group of uninsured older Americans? Imposing additional cost-sharing requirements on Medicare beneficiaries would in theory provide a powerful incentive to limit their use of health services. In reality, the availability of supplemental insurance decreases this incentive, and there is evidence that increased cost sharing can decrease the utilization of medically necessary care. This would be a particularly negative and potentially costly outcome for older persons with chronic conditions who require periodic physician and outpatient care. In general, increasing cost sharing is a regressive approach because it shifts costs to those who are sickest and imposes a greater burden on the poor.

We must continue to reduce the rate of increase in Medicare’s costs. We also have to acknowledge the possibility that society will choose to pay more-in short, higher taxes-for Medicare’s continued protection. This option is often dismissed as not politically feasible, but when people understand the personal costs of the alternatives, they may change their minds.

Medicare has strong and enduring public support because it is a universal program. Although Medicare must be put on a sound financial basis, its universal nature must not be undermined, and reform must not come at the high price of increased numbers of uninsured, increased financial insecurity, and reduced care for those who need it the most. President Johnson perfectly captured the larger purpose of the Medicare program when he said “with the passage of this Act, the threat of financial doom is lifted from senior citizens and also from the sons and daughters who might otherwise be burdened with the responsibility for their parents’ care. [Medicare] will take its place beside Social Security and together they will form the twin pillars of protection upon which all our people can safely build their lives.” We need to ensure that this protection continues into the 21st century.

No Productivity Boom for Workers

America’s love affair with the new technologies of the Information Age has never been more intense, but nagging questions remain about whether this passion is delivering on its promise to accelerate growth in productivity. Corporate spending on information technology hardware is now running in excess of $220 billion per year, easily the largest line item in business capital spending budgets. And that’s just the tip of the cost iceberg, which has been estimated at three to four times that amount if the figure includes software, support staff, networking, and R&D–to say nothing of the unrelenting requirements of an increasingly short product-replacement cycle.

Many believe there are signs that the long-awaited payback from this technology binge must now be at hand. They look no further than the economic miracle of 1997, a year of surging growth without inflation. How could the U.S. economy have entered the fabled land of this “new paradigm” were it not for a technology-led renaissance in productivity?

The wisdom of corporate America’s enormous bet on information technology has never been tested by a cyclical downturn in the real economy.

The technology-related miracles of 1997 go well beyond the seeming disappearance of inflation. The explosion of the Internet, the related birth of electronic commerce, and the advent of fully networked global business are widely viewed as mere hints of the raw power of America’s emerging technology-led recovery. The most comprehensive statement of this belief was unveiled in a legendary article in Wired by Peter Schwartz and Peter Leyden. It argues that we “are riding the early waves of a 25-year run of a greatly expanding economy.” It’s a tale that promises something for everyone, including the disappearance of poverty and geopolitical tensions. But in the end, it’s all about the miracles of a technology-led resurgence in productivity growth. This futuristic saga has become the manifesto of the digital age.

Against this backdrop, the “technology paradox”–the belief that the paybacks from new information technologies are vastly overblown–seems hopelessly outdated or just plain wrong. Could it be that the hype of the Information Age is actually supported by economic data? Ultimately, the debate boils down to productivity, which is the benchmark of any economy’s ability to create wealth, sustain competitiveness, and generate improved standards of living. Have the new technologies and their associated novel applications now reached a critical mass that is introducing a new era of improved and sustained productivity growth that benefits the nation as a whole? Or does the boom of the 1990s have more to do with an entirely different force: namely, the tenacious corporate cost-cutting that has benefited a surprisingly small proportion of the actors in the U.S. economy?

A glacial process

For starters, we should remember that shifts in national productivity trends are typically slow to emerge. That shouldn’t be surprising; aggregate productivity growth represents the synergy between labor and capital, bringing into play not only the new technologies that are embedded in a nation’s capital stock but also the skills of workers in using them to boost their productivity.

The paradox begins on the capital stock side of the productivity equation, long viewed as a key driver of any nation’s aggregate productivity potential. Ironically, although surging demands for new information technologies have boosted overall capital spending growth to an 8.5 percent average annual pace over the period from 1993 to 1996 (a four-year surge unmatched since the mid-1960s), there has been no concomitant follow-through in the rate of expansion of the nation’s capital stock. Indeed, the growth of the total stock of business’s capital over the 1990-1996 interval has averaged only 2 percent, the slowest pace of capital accumulation in the post-World War II era and only half the 4 percent average gains recorded in the heyday of the productivity-led recovery in the 1960s.

There is no inherent inconsistency between information technology’s large capital-spending share and small capital-stock share. The disparity reflects a very short product-replacement cycle and the related implication that about 60 percent of annual corporate information technology budgets goes toward replacement of outdated equipment and increasingly frequent product upgrades. In other words, there is little evidence of a resurgence in overall capital accumulation that would normally be associated with an acceleration in productivity growth.
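A two-asset perpetual-inventory sketch makes the point concrete. The stocks, spending levels, and depreciation rates below are assumed purely for illustration: fast-depreciating information technology is a small slice of the total capital stock, so even very large gross IT budgets translate into modest net capital accumulation.

# Two-asset perpetual-inventory sketch with assumed numbers (illustration only):
# each year's stock = (1 - depreciation) * last year's stock + gross investment.
it_stock, other_stock = 300.0, 9700.0       # total business capital = 10,000 (index units)
it_invest, other_invest = 220.0, 700.0      # assumed annual gross investment
it_dep, other_dep = 0.40, 0.06              # assumed depreciation rates (short IT cycle)

for year in range(4):
    total_before = it_stock + other_stock
    it_stock = (1 - it_dep) * it_stock + it_invest
    other_stock = (1 - other_dep) * other_stock + other_invest
    growth = (it_stock + other_stock) / total_before - 1
    print(f"year {year + 1}: total capital stock grows {growth:+.1%}")

In this toy economy, IT spending is huge relative to the installed IT base, yet most of it simply offsets depreciation, and the total stock grows only 1 to 2 percent a year.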

At the same time, the news on the human capital front is hardly encouraging. In particular, there is little evidence that the educational attainment of U.S. workers has moved to a higher level, which should also be a feature of an economy that is moving to higher productivity growth. The nationwide aptitude test results of graduating high school seniors remain well below the levels of the 1960s. Companies may be working smarter, but there are few signs that this result can be traced to the new brilliance of well-educated and increasingly talented workers.

Productivity is all about delivering more output per unit of work time. It is not about putting in more time on the job.

It is important to understand the historical record of shifts in aggregate productivity growth trends. Acceleration was slow to emerge in the 1960s, with the five-year trend moving from 1.75 percent in the early part of the decade to 2.25 percent at its end. Similarly, the great slowdown that began in the late 1970s saw a downshift in productivity growth, from 2 percent to 1 percent, unfold over 5 to 10 years. Even the 1960s changes fell well short of the heroic claims of the New Paradigmers, who steadfastly insist that U.S. productivity growth has gone from 1 percent in the 1980s to 3 to 4 percent in the latter half of the 1990s. Such an explosive acceleration in national productivity growth would outstrip any historical experience (see Figure 1).

But perhaps it is most relevant to examine that slice of activity where the new synergies are presumed to be occurring: the white-collar services sector. According to U.S. Department of Commerce statistics, fully 82 percent of the nation’s total stock of information technology is installed there, in retailers, wholesalers, telecommunications, transportation, financial services, and a wide array of other business and personal service establishments. Not by coincidence, around 85 percent of the U.S. white-collar work force is employed in the same services sector. Thus, the U.S. productivity debate is all about the synergy, or lack thereof, between information technology and white-collar workers.

Where the rubber meets the road

A look at the shifting mix of U.S. white-collar employment provides some preliminary hints about what lies at the heart of the U.S. productivity puzzle. In recent years, employment growth has slowed most sharply in the back-office (that is, processing) categories of information-support workers who make up 29 percent of the service sector’s white-collar work force. In contrast, job creation has remained relatively vigorous in the so-called knowledge-worker categories–the managers, executives, professionals, and sales workers that account for 71 percent of U.S. white-collar employment.

Increasing the productivity of knowledge workers is going to be far more difficult to achieve than previous productivity breakthroughs for blue-collar and farm workers.

The dichotomy between job compression in low value-added support functions and job growth in high value-added knowledge-worker categories is an unmistakable and important byproduct of the Information Age. Capital-labor substitution works at the low end of the value chain, as evidenced by an unrelenting wave of back-office consolidation, but it is not a viable strategy at the high end of the value chain, where labor input tends to be cerebral and much more difficult to replace with a machine. Consequently, barring near-miraculous breakthroughs in artificial intelligence or biogenetic reprogramming of the human brain, productivity breakthroughs in knowledge-based applications should be inherently slow to occur in the labor-intensive white-collar service industry.

Debunking the measurement critique

There are many, of course, who have long maintained that the U.S. productivity puzzle is a statistical illusion. Usually this argument rests on the presumed understatement of service sector output (the numerator in the productivity equation). This understatement reflects Consumer Price Index (CPI) biases that deflate a current-dollar measure of output with what is believed to be an overstated price level. But there’s also a sense that statisticians are simply unable to capture that amorphous construct, the service sector “product.” That may well be the case, although I note that last summer’s multiyear (benchmark) revisions to the Gross Domestic Product (GDP) accounts, widely expected to uncover a chunk of the “missing” output long hinted at by the income side of the national accounts, left average GDP (and productivity) growth essentially unaltered over the past four years.

I worry more about accuracy in measuring the denominator in the productivity equation: hours worked. Existing labor-market surveys do a reasonably good job of measuring the number of employed workers in the United States, but I do not believe the same can be said for the work schedule of the typical employee. I maintain that working time has lengthened significantly over the past decade and could well reduce the accuracy of the labor input number used to derive national productivity.

Ironically, this lengthening of work schedules appears to be closely tied to an increase in work away from the office that is being facilitated by the new portable technologies of the Information Age: laptops, cellular telephones, fax machines, and beepers. Many white-collar workers are now on the job much longer than the official data suggest. Productivity is all about delivering more output per unit of work time. It is not about putting in more (unmeasured) time on the job. If work time is underreported, then productivity will be overstated no matter what problems exist in the output measurement.

According to a recent Harris Poll, the median number of hours worked per week in the United States rose from 40.6 in 1973 to 50.8 in 1997. This stands in sharp contrast to the 35-hour weekly work schedule assumed in the government’s official estimates of productivity. U.S. workers obviously feel they are working considerably longer hours than Washington’s statisticians seem to believe. The government’s companion survey of U.S. households hints at the same conclusion; it estimates the average 1996 work week in the nonfarm economy at close to 40 hours. That’s far short of the 51 hours reported in the Harris Poll but still considerably longer than the work week used in determining official productivity figures.

Analysis suggests that underreporting of work schedules since the late 1970s has been concentrated in the services sector. The discrepancy is particularly large in the finance, insurance, and real estate (FIRE) component. Similar discrepancies are evident in wholesale and retail trade and in a more narrow category that includes a variety of business and professional services. By contrast, recent trends in both establishment- and household-based measures of work schedules in manufacturing, mining, and construction–segments of the U.S. economy that also have the most reliable output figures–tend to conform with each other.

So what does all this mean for aggregate productivity growth? To answer this question, I have performed two sets of calculations. The first is a reestimation of productivity growth under the work-week assumptions of the Labor Department’s household survey. On this basis, productivity gains in the broad services sector (a nonmanufacturing category that also includes mining and construction) averaged just 0.1 percent annually from 1964 through 1996, about 0.2 percentage points below the anemic 0.3 percent trend derived from the establishment survey. In light of the results of the Harris Poll, this is undoubtedly a conservative estimate of the hours-worked distortion in productivity figures. Indeed, presuming that work schedules in services move in tandem with the results implied by the Harris Poll, our calculations suggest that service sector productivity growth is actually lower, by 0.8 percentage points per year, than the government’s official estimates.
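
The mechanics behind these adjustments are simple: productivity growth is output growth minus hours growth, so every percentage point by which annual hours growth is understated shows up, one for one, as overstated productivity growth. The short sketch below is purely illustrative; it does nothing more than plug in the figures cited above, and the names are mine rather than the Labor Department's.

# Illustrative only: productivity growth equals output growth minus hours growth,
# so understating hours growth overstates measured productivity growth point for point.
def adjusted_productivity_growth(official_growth_pct, hours_understatement_pp):
    """Subtract the annual understatement of hours growth (percentage points)
    from the officially reported productivity growth rate (percent per year)."""
    return official_growth_pct - hours_understatement_pp

official_services_trend = 0.3  # percent per year, establishment-survey basis (cited above)
print(round(adjusted_productivity_growth(official_services_trend, 0.2), 1))  # 0.1, the household-survey case
print(round(adjusted_productivity_growth(official_services_trend, 0.8), 1))  # -0.5, the Harris Poll case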

A final measurement critique of the productivity results also bears mentioning: the belief that statistical pitfalls can be traced to those sectors of the economy (such as services) where the data are the fuzziest. This point of view has been argued by Alan Greenspan and detailed in a supporting paper by the Federal Reserve’s research staff. In brief, this study examines productivity results on a detailed industry basis and concludes that because the figures are generally accurate in the goods-producing segment of the economy, there is reason to be suspicious of results in the service sector, especially in light of well-known CPI biases in this segment of the economy. But it may simply be inappropriate to divide national productivity into its industry-specific components. Distinctions between sectors and industries are increasingly blurred by phenomena such as outsourcing, horizontal integration, and the globalization of multinational corporations.

We have performed some simple calculations that suggest that productivity growth would be lower in manufacturing and higher in services if a portion of the employment growth in the surging temporary staffing industry were correctly allocated to the manufacturing sector rather than completely allocated to the services sector, as is presently the case. (We start with the assumption that about 50 percent of the hours worked by the help supply industry provide support for manufacturing activities, which is broadly consistent with anecdotal reports from temporary help companies. That 50 percent can then be subtracted from the services sector, where it currently resides in accordance with establishment-based employment accounting, and added back into existing estimates of hours worked in manufacturing. This knocks about 0.5 percentage points off average productivity growth in manufacturing over the past six years and boosts productivity growth in the much larger service sector by about 0.1 percentage point over this same period.)
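
For readers who want to see the reallocation mechanics laid out, the sketch below works through a deliberately stylized example. All of the hours and output levels are hypothetical placeholders; only the 50 percent split of help-supply hours comes from the assumption described above. The point is simply that shifting hours out of services and into manufacturing must lower measured productivity in the latter and raise it in the former.

# Stylized sketch of reallocating temporary-help hours from services to manufacturing.
# All levels below are hypothetical; only the 50 percent split reflects the assumption above.
def productivity(output, hours):
    return output / hours

manuf_output, manuf_hours = 100.0, 50.0          # hypothetical manufacturing levels
services_output, services_hours = 300.0, 200.0   # hypothetical services levels
help_supply_hours = 10.0                         # hypothetical, currently booked under services

shifted = 0.5 * help_supply_hours                # share assumed to support manufacturing
print(productivity(manuf_output, manuf_hours))                   # before: 2.0
print(productivity(manuf_output, manuf_hours + shifted))         # after: lower (about 1.82)
print(productivity(services_output, services_hours))             # before: 1.5
print(productivity(services_output, services_hours - shifted))   # after: higher (about 1.54)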

All this is another way of saying that there are two sides to the productivity measurement debate. Those focusing on the output side of the story have argued that productivity gains in services may have been consistently understated in the 1990s. Our work suggests that the biases stemming from underreported work schedules could be more than offsetting, leaving productivity trends even more sluggish than the official data suggest.

A new cost structure

Yet another element of the productivity paradox is the link between America’s open-ended commitment to technology and the flexibility of corporate America’s cost structure, especially in the information-intensive services sector. For most of their long history, U.S. service companies were quintessential variable-cost producers. Their main assets were workers, whose compensation costs could readily be altered by hiring, firing, and a relatively flexible wage-setting mechanism.

Now, courtesy of the Information Age and a heavy investment in computers and other information hardware, service companies have unwittingly transformed themselves from variable- to fixed-cost producers, which denies this vast segment of the U.S. economy the very flexibility it needs to boost productivity in an era of heightened competition. Moreover, the burdens of fixed costs are about to become even weightier thanks to the outsized price tag on the Great Year 2000 Fix–perhaps $600 billion–yet another example of dead weight in the Information Age.

A few numbers illustrate the magnitude of the new technology bet and its impact on business cost structures. Between 1990 and 1997, corporate America spent $1.1 trillion (current dollars) on information technology hardware alone, an 80 percent faster rate of investment than in the first seven years of the 1980s. At the same time, the information technology share of business’s total capital stock (expressed in real terms) has soared from 12.7 percent in 1990 to an estimated 19.1 percent in 1996. The recent surge in this ratio is a good approximation of the ever-expanding increases in fixed technology costs that are now viewed as essential in order to keep transaction-intensive and increasingly global service companies in business.

To be sure, a large portion of these outlays is written off quickly. Nevertheless, with their tax-based service lives typically clustered over three to five years, about $460 billion still remains on the books, which is a little over 40 percent of cumulative information technology spending since 1990. This is hardly an insignificant element of overall corporate costs; by way of comparison, total U.S. corporate interest expenses are presently running at about $400 billion annually.

Let me also stress that the wisdom of corporate America’s enormous bet on information technology has never been tested by a cyclical downturn in the economy. Under such circumstances, it is highly unlikely that U.S. businesses will prune those costs aggressively. After all, information technology is now widely viewed as a critical element of the business infrastructure, essential to operations and therefore not exactly amenable to the standard cost-cutting typically employed to sustain profit margins. Lacking the discretion to pare the technology, managers will be under all the more pressure to slash labor costs. Yet that strategy may also be quite difficult to implement in the aftermath of the massive head-count reductions made earlier in the 1990s.

Whenever it comes, the next recession will be the first cyclical downturn of the Information Age. And it will find corporate America with a far more rigid cost structure than has been the case in past recessions. This suggests that the next recession might also take a far greater toll on corporate earnings than has been the case in past recessions, a possibility that is completely at odds with the optimistic profit expectations that are currently being discounted by an ever-exuberant stock market. In short, the next shift in the business cycle could well provide an acid test of the two competing scenarios of the productivity-led renaissance and the technology paradox. Stay tuned.

Cost cutting vs. productivity

Let me propose an alternative explanation for the so-called earnings miracles of the 1990s. I am not one of those who believe that explosive gains in the stock market over the past three years are a direct confirmation of the (unmeasured) productivity-led successes in boosting corporate profit margins. A better explanation might be an extraordinary bout of good old-fashioned slash-and-burn cost cutting. Consider the unrelenting surge of downsizing that was a hallmark of the 1990s. Whether such strategies took the form of layoffs, plant closings, or outsourcing, the result was basically the same–companies were making do with less. Sustained productivity growth, by contrast, hinges on getting more out of more–by realizing new synergies between rapidly growing employment and the stock of capital. That outcome has simply not been evident in the lean and mean 1990s. As can be seen in Figure 2, recent trends in both hiring and capital accumulation in the industrial sector have been markedly deficient when compared with the long sweep of historical experience.

How can this be? Doesn’t the confluence of improved competitiveness, upside earnings surprises, and low inflation speak of a nation that is now realizing the fruits of corporate productivity? Not necessarily. In my view, it is impossible to discern whether such results have been driven by intense cost cutting or by sustained productivity growth. The evidence, however, weighs heavily in favor of cost cutting. Not only is there a notable lack of improvement in official productivity results for the U.S. economy, there is also persuasive evidence that corporate fixation on cost control has never been greater.

This conclusion should not be surprising. It simply reflects the extreme difficulty of raising white-collar productivity. This intrinsically slow process may be slowed even further if the challenge is to boost the cerebral efficiencies of knowledge workers. And slow improvement may not be enough for corporate managers (and shareholders) confronting the competitive imperatives of the 1990s. As a result, businesses may have few options other than more cost cutting. If that’s so, then the endgame is far more worrisome than the one implied in a productivity-led recovery. In the cost-cutting scenario, companies will become increasingly hollow, lacking both the capital and the labor needed to maintain market share in an ever-expanding domestic and global economy.

Indeed, there are already scattered signs that corporate America may have gone too far down that road in order to boost profits. Recent production bottlenecks at Boeing and Union Pacific are traceable to the excesses of cost cutting and downsizing that occurred in the late 1980s and early 1990s. In a period of sustained growth in productivity, corporate growth is the antidote to such occurrences. But in a world of unrelenting cost cutting, bottlenecks will become far more prevalent, particularly with the rapid expansion of global markets. And then all the heavy lifting associated with a decade of corporate restructuring could quickly be squandered.

The fallacy of historical precedent

Yet another flaw in the productivity revivalist script is the steadfast belief that we have been there before. The New Paradigm proponents argue that the Agricultural Revolution and the Industrial Revolution were part of a continuum that now includes the Information Age. It took a generation for those earlier technologies to begin bearing fruit, and the same can be expected of the long-awaited technology payback of the late 20th century. Dating the advent of new computer technologies to the early 1970s, many are quick to argue that the payback must finally be at hand.

This is where the parable of the productivity-led recovery really falls apart. The breakthroughs of the Agricultural and Industrial Revolutions were all about sustained productivity growth in the creation of tangible products by improving the efficiency of tangible production techniques. By contrast, the supposed breakthroughs of the Information Age hinge more on an intangible knowledge-based product that is largely the result of an equally intangible human thought process.

It may well be that white-collar productivity improvements are simply much harder to come by than blue-collar ones. That’s particularly true in the new global village. It’s a cross-border operating environment that also crosses multiple time zones and involves new complexities in service-based transactions. That’s certainly the case in the financial services industry, where increasingly elaborate products with multidimensional attributes of risk (such as currencies, credit quality, and a host of systemic factors) are now traded 24 hours a day. In the Information Age, much is made of the exponential growth of computational power. I would argue that the complexity curve of the tasks to be performed has a similar trajectory, suggesting that there might be something close to a standoff between these new technological breakthroughs and the problems they are designed to solve.

The issue of task complexity is undoubtedly a key to understanding the white-collar productivity paradox. The escalating intricacy of knowledge-based work demands longer schedules, facilitated by the portable technologies that make remote problem-solving feasible and, in many cases, mandatory. Whether the time is spent surfing the Web, performing after-hours banking, or hooking up to the office network from home, hotel, or airport waiting lounge, there can be no mistaking the increasingly large time commitment now required of white-collar workers.

Nor is it clear that information technologies have led to dramatic improvements in time management; witness information overload in this era of explosive growth in Web-based publishing, a phenomenon that far outstrips the filtering capabilities of even the most powerful search engines. The futuristic saga of the productivity-led recovery fails to address the obvious question: Where does this incremental time come from? The answer is that it comes increasingly out of leisure time, reflecting an emerging conflict between corporate and personal productivity.

This is consistent with the previous critique of productivity measurement. Productivity enhancement, along with its associated improvements in living standards, is not about working longer but about adding value per unit of work time. This is precisely what’s lacking in the Information Age.

Paradigm lost?

There can be no mistaking the extraordinary breakthroughs of the new technologies of the Information Age. The faster, sleeker, smaller, and more interconnected information appliances of the late 1990s are widely presumed to offer a new vision of work, leisure, and economic and social hierarchies. But is this truly the key to faster productivity growth for the nation?

My answer continues to be “no”; or possibly, if I don my rose-colored glasses, “not yet.” Improvements in underlying productivity growth are one of the most difficult challenges that any nation must confront. And increasing the productivity of knowledge workers in particular is going to be far more difficult to achieve than previous productivity breakthroughs for blue-collar and farm workers.

That takes us to the dark side of America’s technology paradox. Rushing to embrace the New Paradigm entails a real risk of overlooking the most basic and powerful benefit of an improvement in overall productivity: an increase in the national standard of living. On this, the evidence is hardly circumstantial: more than 15 years of virtual stagnation in real wages, an unprecedented widening of inequalities in income distribution, and a dramatic shift in the work-leisure tradeoff that puts increasing stress on family and personal priorities. At the same time, there can be no mistaking the windfalls that have accrued to a small slice of the U.S. population, mainly those fortunate managers, executives, and investors who have benefited from the corporate earnings and stock market bonanza of the 1990s.

In the end, I continue to fear that much of the debate over the fruits of the Information Age boils down to the classic power struggle between capital and labor. I find it difficult to believe that corporate America can cut costs forever; there really is a limit to how far managers can take the credo of “lean and mean,” and there are signs that the limit is now in sight. I find it equally difficult to believe that workers will continue to acquiesce in a system that rewards few for the efforts of many, especially in view of the dramatic cyclical tightening of the labor market that has taken the national unemployment rate to its lowest level in 24 years. A recent upturn in the wage cycle suggests that the forces of supply and demand are now beginning to weigh in with the same cyclical verdict. All this implies that the pendulum of economic power may be starting a long-overdue swing from capital back to labor, repeating the timeworn patterns of power struggles past.

Like it or not, the New Paradigm perception of a technology-led productivity renaissance is about to meet its sternest test. That test should reflect not only the social and economic pressures of worker backlash but also a classic confrontation between cost-cutting tactics and the pressures of the business cycle. Moreover, to the extent that the technology paradox is alive and well (and that remains my view), the days of ever-expanding profit margins, subdued inflation, and low interest rates could well be numbered. Needless to say, such an outcome would come as a rude awakening for those ever-exuberant financial markets that are now priced for the perfection of the Long Boom.

Making Guns Safer

Children are killing children by gunfire. These deaths are occurring in homes, on the streets, and in schools. When possible solutions to this problem are discussed, conversation most often focuses on the troubled youth. Interventions involving conflict resolution programs, values teaching, reducing violence on television, and making available after-school activities and positive role models are proposed. Although each of these interventions may provide benefits, they are, even in combination, inadequate to eliminate childhood shootings. Behavior-modification programs cannot possibly reach and successfully treat every troubled youth capable of creating mayhem if he or she finds an operable firearm within arm’s reach.

But behavior modification isn’t the only possible solution. Another intervention is now being developed: the personalized gun, a weapon that will operate only for the authorized user. Personalized guns could reduce the likelihood of many gun-related injuries to children as well as adults. They could be especially effective in preventing youth suicides and unintentional shootings by young children. Personalized guns could also reduce gun violence by making the many firearms that are stolen and later used in crime useless to criminals. Law enforcement officers, who are at risk of having their handgun taken from them and being shot by it, would be safer with a personalized gun.

About 36,000 individuals died from gunshot wounds in 1995; of these, more than 5,000 were 19 years of age or younger. Suicide is among the leading causes of death for children and young adults. In 1995, more than 2,200 people between 10 and 19 years of age committed suicide in the United States, and 65 percent of these used a gun.

Adolescence is often a turbulent stage of development. Young people are prone to impulsive behavior, and studies show that thoughts of suicide occur among at least one-third of adolescents. Because firearms are among the most lethal methods of suicide, access to an operable firearm can often mean the difference between life and death for a troubled teenager. Studies have shown a strong association between adolescent suicide risk and home gun ownership. Although the causes of suicide are complex, personalizing guns to their adult owners should significantly reduce the risk of suicide among adolescents.

Personalized guns could be especially effective in preventing teenage suicides and unintentional deaths and injuries of children.

The number of unintentional deaths caused by firearms has ranged between 1,225 and 2,000 per year since 1979. Many of the victims are young children. In 1995, the most recent year for which final statistics are available, 440 people age 19 and younger, including 181 who were under 15, were unintentionally killed with guns.

Some have argued that the best way to reduce these unintentional firearm deaths is to “gun proof” children rather than to child-proof guns. It is imprudent, however, to depend on adults’ efforts to keep guns away from children and children’s efforts to avoid guns. Firearms are available in almost 40 percent of U.S. homes, and not all parents can be relied upon to store guns safely. Surveys have documented unsafe storage practices, even among those trained in gun safety.

Stolen guns contribute to the number of gun-related deaths. Experts estimate that about 500,000 guns are stolen each year. Surveys of adult and juvenile criminals indicate that thefts are a significant source of guns used in crime. Roughly one-third of the guns used by armed felons are obtained directly through theft. Many guns illegally sold to criminals on the street have been stolen from homes. Research on the guns used in crime demonstrates that many are no more than a few years old. Requiring all guns to be personalized could, therefore, limit the availability of usable guns to adult and juvenile criminals in the illegal gun market.

Advancing technology

The idea of making a gun that some people cannot operate is not new. Beginning in the late 1880s, Smith & Wesson made a handgun with a grip safety and stated in its marketing materials that “…no ordinary child under eight can possibly discharge it.” More recently, some gun manufacturers have provided trigger-locking devices with their new guns. But trigger locks require the gun owner’s diligence in re-locking the gun each time it has been unlocked. Moreover, handguns are frequently purchased by buyers who believe they need, and will gain, immediate self-protection; such owners may perceive devices like trigger locks as a hindrance when they want the gun immediately available. And some trigger locks currently on the market are so shoddy that they can easily be removed by anyone.

Today, a number of technologies are available to personalize guns. Magnets, for example, have long been used for this purpose. The Magna-Trigger™ system uses a ring containing a magnet that, when properly aligned with a magnet installed in the grip of the gun, physically moves a lever in the grip of the firearm, allowing the gun to fire. However, the Magna-Trigger™ system is not currently built into guns as original equipment; it must be added later. Because the gun owner must take this additional step and because the magnetic force is not coded to the gun owner, this technology is not optimal.

Another technology–touch memory–was used in 1992 by Johns Hopkins University undergraduate engineering students to develop a non-firing prototype of a personalized gun. Touch memory relies on direct contact between a semiconductor chip and a reader on the grip of the gun. A code is stored on the chip, which is placed on a ring worn by the user. The gun will fire only if the reader recognizes the proper code on the chip.

Another type of personalized gun employs radio frequency technology, in which the user wears a transponder imbedded in a ring, a watch, or a pin attached to his or her clothing. A device within the firearm transmits low-power radio signals to the transponder, which in turn “notifies” the firearm of its presence. If the transponder code is one that has previously been entered into the firearm, the firearm “recognizes” it and is enabled. Without the receipt of that coded message, however, a movable piece within the gun remains in a position that mechanically blocks the gun from firing. One major gun manufacturer has developed prototypes of personalized handguns using radio frequency technology and expects to market these guns soon.
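
Whatever the underlying hardware, the recognition logic these systems rely on is a simple authorization check. The schematic sketch below illustrates that logic in code form; the names and structure are invented for illustration and do not describe any manufacturer's actual implementation.

# Schematic sketch of the code-recognition logic described above; all names are
# invented for illustration and do not reflect any actual product.
class PersonalizedFirearm:
    def __init__(self, authorized_codes):
        self.authorized_codes = set(authorized_codes)  # codes previously entered into the firearm
        self.blocked = True                            # the movable piece blocks firing by default

    def poll_transponder(self, received_code):
        """Enable the firearm only if a nearby transponder returns a recognized code."""
        self.blocked = received_code not in self.authorized_codes
        return not self.blocked

gun = PersonalizedFirearm(authorized_codes={"owner-ring-001"})
print(gun.poll_transponder("owner-ring-001"))    # True: code recognized, firing mechanism unblocked
print(gun.poll_transponder("stranger-ring-999")) # False: code not recognized, gun stays blocked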

The personalization method of the near future appears to be fingerprint reading technology. A gun would be programmed to recognize one or more fingerprints by use of a tiny reader. This eliminates the need for the authorized user to wear a ring or bracelet. Regardless of the technology that is ultimately chosen by most gun manufacturers, several gun magazines have advised their readers to expect personalized handguns to be readily available within the next few years.

Prices for personalized handguns will be higher than for ordinary handguns. The Magna-Trigger™ device can be fitted to some handguns at a cost of about $250, plus $40 for the ring. One gun manufacturer originally estimated that personalizing a handgun would increase the cost of the gun by about 50 percent; however, with the decreasing cost of electronics and with economies of scale, the cost of personalization should substantially decrease. Polling data show that the gun-buying public is willing to pay an increased cost for a personalized handgun.

Regulating gun safety

Most gun manufacturers have not yet indicated that they will redesign their products for safety. When the manufacturers of other products involved with injuries were slow to employ injury prevention technologies, the federal government forced them to do so. But the federal government does not mandate safety mechanisms for handguns. The Consumer Product Safety Commission, the federal agency established by Congress to oversee the safety of most consumer products, is prohibited from exercising jurisdiction over firearms. However, bills have been introduced in several states that would require new handguns to be personalized. Regulation and litigation against firearms manufacturers may also add to the pressure to personalize guns.

Important legislative and regulatory efforts have already taken place in Massachusetts. The state’s attorney general recently promulgated the nation’s first consumer protection regulations regarding handguns. The regulations require that all handguns manufactured or sold in Massachusetts be made child-resistant. If newly manufactured handguns are not personalized, then stringent warnings about the product’s danger must accompany handgun sales. Bills affecting gun manufacturers’ liability have also been introduced in the state legislature. The proposed legislation imposes strict liability on manufacturers and distributors of firearms for the deaths and injuries their products cause. Strict liability would not be imposed, however, if a firearm employs a mechanism or device designed to prevent anyone except the registered owner from discharging it.

A bill recently introduced in California would require that concealable handguns employ a device designed to prevent use by unauthorized users or be accompanied by a warning that explains the danger of a gun that does not employ a “single-user device.” A bill introduced in the Rhode Island legislature would require all handguns sold in the state to be child-resistant or personalized.

To aid legislative efforts that would require personalized guns, the Johns Hopkins Center for Gun Policy and Research has developed a model law entitled “A Model Handgun Safety Standard Act.” Legislation patterned after the model law has been introduced in Pennsylvania, New York, and New Jersey.

One objection to legislation requiring handguns to be personalized is that the technology has not yet been adequately developed. But in interpreting the validity of safety legislation, courts traditionally have held that standards need not be based upon existing devices. For example, in a 1983 case involving a passive-restraint standard promulgated pursuant to the National Traffic and Motor Vehicle Safety Act of 1966, the Supreme Court ruled that “…the Act was necessary because the industry was not sufficiently responsive to safety concerns. The Act intended that safety standards not depend on current technology and could be ‘technology-forcing’ in the sense of inducing the development of superior safety design.”

The model handgun safety legislation mandates the development of a performance standard and provides an extended time for compliance–two features the courts have said contribute to the determination that a standard is technologically feasible. A performance standard does not dictate the design or technology that a manufacturer must employ to comply with the law. The model law calls for adoption of a standard within 18 months of passage of the law, with compliance beginning four years after the standard is adopted.

Legislative efforts to promote the use of personalized guns can be complemented by litigation. For some time, injury-prevention professionals have recognized that product liability litigation fosters injury prevention by creating a financial incentive to design safer products. One lawsuit is already being litigated in California against a gun manufacturer in a case involving a 15-year-old boy who was shot unintentionally by a friend playing with a handgun. The suit alleges that, among other theories of liability, the handgun was defective because its design did not utilize personalization technology. Additional cases against gun manufacturers for failure to personalize their products can be expected.

Firearm manufacturers need to realize the benefits of personalized guns. The threat of legislation, regulation, or litigation may be enough to convince some manufacturers to integrate available personalization technologies into their products. When personalized guns replace present-day guns that are operable by anyone, the unauthorized use of guns by children and adolescents will decrease, as will the incidence of gun-related morbidity and mortality.

Rethinking Pesticide Use

In Nature Wars, Mark L. Winston argues that the public’s equally intense phobias about pests and pesticides often result in irrational pest control decisions. In many situations our hatred of pests leads to unwarranted use of pesticides that poison the environment. In other cases, our fear of pesticides prompts us to let real pest problems grow out of control. Winston’s message is that effective education of public leaders and typical homeowners alike would do much to scale back both the war on pests and the unnecessary pollution that accompanies it.

Winston, professor of biological sciences at Simon Fraser University in British Columbia, Canada, uses a series of well-chosen anecdotes as the primary tool for conveying his message. For example, he tells how a media-fueled battle among scientists, politicians, and environmentalists in Vancouver, British Columbia, almost resulted in a ban on aerial spraying of the bacterial insecticide Bt. The spraying was aimed at eliminating a 1991 gypsy moth infestation that threatened lumber exports to the United States. To allay public concerns, the government chose a product widely used by organic farmers. But a few environmental groups, armed with one report of a harmless Bt bacterium isolated from a patient with an eye lesion, convinced many citizens that they and their children would be sprayed with harmful bacteria. The battle over spraying went on for months.

Although some Vancouver citizens were ready to risk their lumber industry to avoid the Bt spraying, others didn’t seem at all concerned about having hard-core insecticides sprayed inside their homes. A number of exterminators interviewed by Winston said they advised clients to use slow-acting alternatives to insecticides but were bluntly rebuffed. The customers wanted every roach gone by yesterday.

How serious a problem are the chemical tactics used to battle roaches versus the pests themselves? Winston reports that about one-quarter of all U.S. homes are treated for roaches and that 15 percent of all poisonings from five major insecticides (including those used on roaches) occur in homes. Scientific evidence, on the other hand, indicates that low-level roach infestations do not lead to disease transmission. (The most serious concern is allergenicity for some individuals.)

Winston’s point in illustrating this contrast is basic: As long as people don’t have to come face to face with pests, they hate insecticides and are ready to believe that farmers and public officials are allowing the air, water, and food supply to be poisoned. But when a roach or spider is spotted in a kitchen, concern over cancer and the environment often seems to vanish. Because we don’t know how much Raid it takes to kill a spider, we buy the large can. And if we call the exterminator, we want quick service, although many of us would prefer that the technician show up in an unmarked car.

Rational pest control?

Because of this schizophrenic response by the public to pests and pesticides, Winston argues that it will be difficult to institute rational pest management programs. Effective public education is critical to progress on this front, he believes, and there is evidence that education can work. Farmers once sprayed pesticides on crops without even checking to see if a pest was present. But during the past 25 years, the U.S. Agricultural Extension Service has taught many farmers how to use integrated pest management (IPM). In its purest form, IPM involves determining why a pest problem exists and what combination of changes in a farming system, including pesticide use, would reduce the problem with the lowest environmental and economic costs. In cases in which specific IPM research and education programs have been practical and based on rigorous data, significant reductions in pesticide use have resulted and farmers have saved money. Unfortunately, IPM programs have received limited funding; many more farmers still need hands-on IPM training. IPM programs will not really flourish until the public has enough education to demand more funding for training programs.

It is widely acknowledged that a negative psychological response to insects is deeply imbedded in Western culture. Winston argues that this response cannot be overcome simply by accumulating more scientific data. In the short run, he says, money spent on good public relations efforts may be much more useful in establishing rational pest control programs than money spent on research. For example, Winston believes that without the public education campaign launched by the government of British Columbia, the environmentally benign gypsy moth spraying program would ultimately have crashed and burned.

Also imbedded in modern Western culture is a general distrust of technology. How can anyone trust scientists and government officials who used to say that DDT was safe? Unfortunately, the public, environmentalists, and sometimes even Winston seem too willing to trust scientists who find monumental problems with pesticide technology. Surely there are problems associated with pesticides, but I am concerned that some scientists draw unwarranted conclusions about their magnitude.

Dubious estimate

For example, Winston argues that there is a hidden cost to society of $8 billion per year from pesticide use. He bases this estimate on an analysis by David Pimentel and his Cornell University colleagues in their 1993 book The Pesticide Question. Yet there are problems with how these estimates were made. Pimentel et al. estimate that “pesticide cancers” cost society $700 million annually, a figure derived from an unpublished study that concluded that less than 1 percent of the nation’s cancer cases are caused by pesticides. Pimentel and his colleagues base their calculation on the assumption that 1 percent of people get cancer from pesticides, but they could just as justifiably have assumed that the number was zero and that there was no cost. They further estimate a $320 million annual loss caused by adverse pesticide effects on honeybees. But this ignores the fact that without the targeted spraying of bees with selective pesticides, the beekeeping industry would in recent years have been devastated by acarine pests of bees.

The largest hidden cost revealed by the Pimentel study is from the death of birds in crop fields. The authors “conservatively” estimate that 10 percent of birds in crop fields die because of pesticide use. At an estimated cost to society of $30 per bird, the total cost is $2 billion. But nature doesn’t work that way. Since organochlorine pesticides were banned, reductions in bird populations have been linked to habitat loss, not to the toxicity of pesticides to vertebrates. Why do so many policymakers (and Winston) accept the Pimentel assessment? Maybe because we love to hate pesticides.
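
A little arithmetic shows just how sensitive the figure is to its assumptions. The sketch below, which is illustrative only, backs out the number of bird deaths implied by the $30-per-bird and $2 billion figures cited above and shows that the total scales directly with the assumed 10 percent kill rate.

# Illustrative arithmetic on the bird-mortality estimate cited above.
cost_per_bird = 30.0                 # dollars per bird, as assumed in the study
total_cost = 2.0e9                   # dollars per year, the study's headline figure
print(total_cost / cost_per_bird)    # implies roughly 67 million bird deaths per year

# The total is directly proportional to the assumed kill rate; lower or zero the
# assumption and the "hidden cost" shrinks accordingly (alternative rates are hypothetical).
assumed_kill_rate = 0.10
for alternative_rate in (0.10, 0.01, 0.0):
    print(alternative_rate, total_cost * alternative_rate / assumed_kill_rate)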

Although Winston believes that pesticide use needs to be reduced, he differs from many environmentally concerned citizens and governments in his ideas about how it should be done. He argues that the use of genetically engineered plants such as those that produce a toxin derived from the Bt bacterium would be far preferable to the spraying of chemical pesticides. Yet a large segment of the public has a different attitude. No matter what the data show about the positive attributes of Bt, anything that is genetically engineered can grab a negative sound bite on TV and be translated into a fear-producing fact. Public concern about the potential hazards of genetically engineered crops has halted commercialization in Europe. In the United States, bioengineered plants that target specific pests without damaging beneficial insects are expected to be planted on more than 10 million acres of farmland during the summer of 1998. The worst problem foreseen by U.S. scientists is that insects will rapidly adapt to the Bt toxins and put us back at square one.

In one of the book’s last chapters, “Moving beyond Rachel Carson,” Winston criticizes the Environmental Protection Agency (EPA) for its focus on regulatory protection instead of providing alternatives to pesticides. EPA, he writes, has become bogged down with the Sisyphean task of assessing the impacts of thousands of new pesticides pouring out of the industrial research pipeline.

In addition, Winston argues that since the publication of Silent Spring, most of the research on alternatives has been conducted and assessed by academic researchers who tend to discover “scientifically interesting but impractical alternatives.” If research were conducted and judged by a more diverse group of stakeholders, including farmers, Winston says, more viable alternatives would be developed. Although not mentioned by Winston, this idea is currently being tested in a Department of Agriculture grant program called Sustainable Agricultural Research and Education, which sponsors only research involving both farmers and scientists. Farmers judge the potential utility of each proposed project before it is funded.

Perhaps the most obvious reason why we haven’t replaced pesticides is that they are so damn cheap. Winston says that if the hidden costs of pesticides could be taxed, the alternatives would become more economically appealing. But taxing the environmental and health effects of pesticide use will be a tough battle. Unlike the case of cigarettes, in which rigorous and voluminous data on societal costs have been collected, the United States still does not have good data on pesticide costs. It is amazing that 35 years after Silent Spring was published, the most often quoted estimate is the dubious Pimentel number. I agree with Winston that it would be wonderful if EPA could offer more leadership in developing alternatives to pesticides. However, I also think it would be useful if EPA could offer leadership in determining what the real costs of current pesticide use are, so that we would actually know just how critical it is to replace specific pesticides or change general patterns of pesticide use.

The Long Road to Increased Science Funding

For decades, the United States has quietly supported one of the key sources of our nation’s innovation and creativity–federal funding of basic scientific, medical, and engineering research. Federal investments in research have yielded enormous benefits to society, spawning entire new industries that now generate a substantial portion of our nation’s economic activity.

Continuation of our nation’s brilliant record of achievement in the creation of knowledge is threatened, however, by the decline in federal R&D spending. In 1965, government investment in R&D was equal to roughly 2.2 percent of gross domestic product (GDP). Thirty-two years later, that figure has dropped to just 0.8 percent. Current projections indicate that the federal R&D budget will continue to decline as a fraction of GDP.

We recently introduced the National Research Investment Act of 1998 (S. 1305), which would double the federal investment in “nondefense basic scientific, medical and precompetitive engineering research” to $68 billion over the next 10 years. The bill would authorize a 7 percent increase (in nominal dollars exclusive of inflation and GDP growth) in funding per year for the science and technology portfolios of 12 federal agencies, including the National Institutes of Health (NIH). The NIH budget would increase from $13.6 billion this year to $27.2 billion by 2008. The bill stipulates that research results be made available in the public domain, that funds be allocated using a peer-review system, and that all of the spending increases be made to fit within the discretionary spending caps established by the balanced budget agreement.
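
The bill's arithmetic is straightforward compounding: a 7 percent annual increase roughly doubles a budget over a decade. The quick check below is illustrative only and simply applies that growth rate to the NIH figure cited above.

# Illustrative check: 7 percent annual growth compounds to roughly a doubling in 10 years.
growth_rate = 0.07
years = 10
multiplier = (1 + growth_rate) ** years
print(round(multiplier, 2))                     # about 1.97

nih_budget_1998 = 13.6                          # billions of dollars, as cited above
print(round(nih_budget_1998 * multiplier, 1))   # about 26.8, in line with the bill's $27.2 billion target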

The bill is an important declaration of principles, but it will require 10 years of patient follow-through if its goals are to be realized. With the end of the Cold War, it is time for the scientific and engineering communities to articulate more forcefully the economic value of what they do. Stated bluntly, the research community will have to become organized in a way that it has not been before.

Scientists and engineers are well positioned to make their case to the taxpaying public and their congressional representatives. Universities employ 2.5 million people in this country. That’s more than the employment provided by the automobile, aerospace, and textile industries combined. Think about how influential any one of those industries is in Washington today compared to science and engineering. Moreover, universities are geographically distributed and often are the largest or second largest employer in any given congressional district. The research community is truly a sleeping giant on the U.S. political landscape.

We believe that S. 1305 represents the best opportunity to awaken that latent political force and build a bipartisan national consensus on significantly increasing the federal investment in civilian R&D over the next decade. The bill is a coalition-building vehicle and an argument that a knowledge-based society must continue to grow its most critical resource: its store of knowledge.

The next few months will be a crucial time for building support for R&D investments. Both political parties have largely cleared the decks with respect to the agendas they have been pursuing for the past several years, and recent improvements in the projected five-year revenue outlook give both parties more room to maneuver within the confines of the federal balanced budget agreement. The federal budget pie is now being sliced for the next half-decade. It is an important time, therefore, for the research community to make its case for increased investments in publicly financed research.

We are also encouraged by the policy work being carried out by our colleagues in the House of Representatives. Under the able direction of Rep. Vernon Ehlers of Michigan, and with the blessing of House leadership, the House Science Committee is drafting a policy document that is intended to guide the federal research infrastructure for the next few decades.

We believe that our efforts and those on the House side are complementary. We ask that you in the scientific community engage with us and help us to reinvigorate the federal research enterprise. We need your help to encourage your senators to cosponsor S. 1305, and the House Science Committee needs your input into its important science policy study. Together, we can ensure that our nation remains a leader in science and technology well into the next century.

From the Hill – Spring 1998

Clinton’s proposed big boost in R&D spending faces obstacles

President Clinton’s FY 1999 budget request, which projects the first surplus in nearly 30 years, calls for increased R&D investments, especially for fundamental science, biomedical research, and research aimed at reducing greenhouse gas emissions.

Under Clinton’s plan, federal R&D support would total $78.2 billion, which is $2 billion or 2.6 percent more than in FY 1998. Nondefense R&D spending would rise 5.8 percent to $37.8 billion, and defense R&D would decline by 0.3 percent to $40.3 billion.

Although the president’s proposed R&D budget request is the largest in years, there are obstacles to achieving it. First, the administration will have to convince Congress to buy into its plan to increase discretionary spending above a cap that was set as part of last year’s balanced budget agreement. The administration has established three special funds, or groups of high-priority nondefense programs, that Congress must now consider supporting. One of these, the Research Fund for America, would include most but not all nondefense R&D. (The other two funds would cover transportation and natural resources and environment programs.) The Research Fund for America would get $31.1 billion in FY 1999, up 11 percent from the previous year.

To get around the discretionary cap, the budget would fund $27.1 billion of the $31.1 billion from discretionary spending that is subject to the cap. The remaining $4 billion, essentially representing all of the requested increases for nondefense R&D, would come from new offsetting revenues outside the cap.

One problem with this approach is that $3.6 billion of the additional $4 billion for the Research Fund for America is projected to come from revenues resulting from tobacco legislation, which is a highly contentious issue in this Congress. Thus, in order to fund the requested increases for nondefense R&D programs, Congress will have to do one of four things: enact tobacco legislation that would allocate a portion of the settlement for research programs; increase discretionary spending, thus breaking last year’s agreement; increase discretionary spending and taxes to compensate; or allocate the spending under the current caps, thus requiring offsetting cuts in non-R&D programs.

The president’s nondefense R&D budget has four priorities. First, biomedical research would get a big boost. The National Institutes of Health (NIH) would receive $14.2 billion, up 8.1 percent. Of this amount, the National Cancer Institute would receive $2.5 billion.

Second, the National Science Foundation (NSF), the primary supporter of basic research in most nonbiomedical fields, would receive $2.9 billion, up 11 percent. NSF’s research directorates would each receive double-digit percentage increases, led by a 16.5 percent increase for research in the Computer and Information Science and Engineering directorate.

Third, energy research in the Department of Energy (DOE) would get increased funding, largely because of the U.S. effort to reduce greenhouse gas emissions in response to last year’s Kyoto Protocol on climate change. DOE’s nondefense R&D funding would jump by 11.1 percent to $3.8 billion, with the increase focused on developing energy-efficient technologies. DOE’s defense R&D would rise by 10.4 percent to $3.3 billion, largely because of more spending for the Stockpile Stewardship Program, which is developing computer models to measure the reliability of the nation’s nuclear weapons.

Finally, basic research spending would rise to $17 billion, up 7.6 percent. NIH would get nearly half of this ($8 billion, up 8.4 percent). The Department of Defense basic research account would increase by 6.6 percent to $1.1 billion.

Tobacco deal could be a boon to biomedical research

Members of Congress are less than optimistic about enacting comprehensive tobacco legislation this year. But if a deal is reached, it’s likely that biomedical research will be a big winner.

In June of 1997, various states and the tobacco industry negotiated an agreement that, if approved by Congress, would settle a number of lawsuits and provide the industry with future legal immunity. In exchange, cigarette and smokeless tobacco companies would pay $368.5 billion to federal, state, and local governments over 25 years. Five Senate bills that were introduced in October and November of 1997 would use the tobacco industry’s billions to support biomedical science.

In S. 1411, Sen. Connie Mack (R-Fla.) and Sen. Tom Harkin (D-Iowa) propose to increase funds for medical research by eliminating the ability of tobacco companies to deduct any lawsuit settlements from their taxes. Those funds, estimated at $100 billion, would be used to establish a National Institutes of Health (NIH) Trust Fund for Health Research. Under the terms of the bill, which is cosponsored by nine Democrats and six Republicans and supported by more than 175 organizations, NIH would decide how most of the money would be spent.

S. 1530, proposed by Sen. Orrin G. Hatch (R-Utah), citing the industry’s past “reprehensible” marketing of tobacco, calls for higher punitive damages–$398.3 billion over 25 years. Most of the money would go to various kinds of health-related research and activities at NIH. Hatch also wants a National Tobacco Research Agenda to be prepared annually by the Food and Drug Administration, the Centers for Disease Control and Prevention, NIH, and others. The agenda would outline research concerning the role of tobacco products in causing cancer, genetic and behavioral factors related to tobacco use, the development of prevention and treatment models, the development of safer tobacco products, and brain development in infants and children.

S. 1415, proposed by Sen. John McCain (R-Ariz.), would establish a Public Health Trust Fund and a National Cessation Research Program. McCain’s bill would restrict research funding to the development of methods, drugs, and devices to discourage individuals from using tobacco products and would provide financial assistance to individuals trying to quit using tobacco products.

Sen. Edward M. Kennedy (D-Mass.), in S. 1492, has proposed establishing a National Biomedical and Scientific Research Board to make grants and contracts for the conduct and support of research and training in basic and biomedical research and child health and development. In a companion bill, S. 1491, Kennedy proposes an excise tax of $1.50 per pack on cigarettes, which would bring in $20 billion a year, including $10 billion a year to fund research. Kennedy’s approach differs from other bills in that yearly revenues would be generated over an unlimited time period–$650 billion over the first 25 years. Equivalent legislation has been introduced in the House by Rep. Rosa DeLauro (D-Conn.).

Sen. Frank R. Lautenberg (D-N.J.) has introduced S. 1343, which proposes to increase the cigarette excise tax rate by $1.50 per pack. The revenues would be deposited in a public health and education trust fund. The Lautenberg bill would allocate much less money to research than would the Kennedy bill. Rep. James V. Hansen (R-Utah) has introduced a House version (H.R. 2764) of the Lautenberg legislation.

Sensible, coherent, long-term S&T strategy sought

With the end of the era of federal budget deficits in sight, House and Senate members from both parties are calling for a doubling of federal nondefense R&D spending during the next 5 to 10 years. At the same time, however, key congressional leaders are warning that future science budgets, in the words of Rep. F. James Sensenbrenner, Jr. (R-Wisc.), “must be justified with a coherent, long-term science policy that is consistent with the need for a balanced budget.”

Last fall, Sensenbrenner, chair of the House Science Committee, and House Speaker Newt Gingrich launched a year-long study to develop “a new, sensible” long-range science and technology (S&T) policy, including a review of the nation’s science and math education programs. They tapped Rep. Vernon Ehlers (R-Mich.), vice chair of the science committee, to lead the study.

Since the end of the Cold War, many policymakers have called for a reconsideration of the role of government, industry, and academia in supporting S&T to better reflect the environment we live in today. Although numerous scholarly reports have recommended various options for a post-Vannevar Bush science policy, the House Science Committee study is the first time that Congress has attempted to address this issue since the mid-1980s.

Ehlers has stated that his goal is to prepare a “concise, coherent, and comprehensive” document by June 1998 in order to obtain the legislative support needed to move ahead. In an effort to maintain bipartisan interest, congressional staffers from both parties have been assigned to assist Ehlers.

Ehlers launched the study by conducting two roundtable discussions. The first involved almost 30 renowned scientists and policy experts; the second included young, early-career scientists. During the roundtable discussions, Ehlers and his staff posed a long list of questions on topics such as encouraging industry investment; enhancing collaborative research partnerships among government, industry, and academia; and contributing to international cooperation in research. Readers can contribute to the study by providing their own answers to the questions posed to the experts. To do so, visit the science policy study Web site at <http://www.house.gov/science/science_policy_study.htm>.

Cloning debate heats up

A year after the world learned that an adult mammal had been successfully cloned, the issue of human cloning continues to be a major concern in Congress. After the initial excitement about cloning died down last year, it appeared unlikely that any legislation would be passed soon. But earlier this year Congress was spurred into action after Chicago physicist Richard Seed said he would set up a lab to use somatic cell nuclear transfer, the cloning technique used to create Dolly the sheep, to clone human beings. Seed claimed that he had some financial backing as well as an infertile couple willing to participate in the procedure.

Two important bills were introduced in the Senate. Sen. Bill Frist (R-Tenn.), a medical doctor, and Sen. Christopher Bond (R-Mo.), a longstanding opponent of human embryo research, introduced legislation prohibiting the creation of a human embryo through somatic cell nuclear transfer. Sen. Dianne Feinstein (D-Calif.) and Sen. Edward Kennedy (D-Mass.) introduced a bill prohibiting the implantation of a cloned human embryo in a woman’s uterus, thus avoiding the controversial issue of human embryo research.

The Republican leadership in the Senate, seeking to address concerns about human cloning and embryo research, brought a bill equivalent to the Bond-Frist legislation swiftly to the floor for a vote. But Feinstein and Kennedy led a filibuster that could not be broken.

The House has moved more cautiously on this controversial issue. Last year, the Science Committee passed a bill sponsored by Rep. Vernon Ehlers (R-Mich.) that would ban federal funding for human cloning research. Earlier this year, the Commerce Committee held a full-day hearing on the legal, medical, ethical, and social ramifications of cloning.

At the heart of the cloning debate is the question of whether the benefits that cloning research may yield outweigh the possible risks to human morality, identity, and dignity. Not least among the concerns about cloning is the possibility that imperfect techniques could produce damaged human embryos.

Further complicating the cloning debate are recent questions about the validity of the experiment that led to the birth of Dolly. The experiment using adult cells to clone an animal has not yet been duplicated, and skepticism is rising. Ian Wilmut, the Scottish scientist who created Dolly, has promised to prove that Dolly is the real thing.

Debate over database protection continues

A House bill aimed at strengthening copyright protection for database publishers is arousing concern among some scientists, educators, and librarians. H.R. 2652, introduced by Rep. Howard Coble (R-N.C.), seeks to address various concerns raised by database publishers. Under current law, databases are entitled to copyright protection only if the information contained is arranged or selected in an original way. The effort involved in simply compiling the data isn’t enough to justify protection. Database publishers claim that they work long and hard to compile their data, regardless of how it is organized. Current law, they say, leaves them vulnerable to others who wish to duplicate their products. “Without effective statutory protection, private firms will be deterred from investing in database production,” warns the Information Industry Association.

Coble’s bill would prohibit the use of data from a database in a way that would harm the marketability of the original database. The prohibition would apply only if the data in question represented an “investment of substantial monetary or other resources.”

The bill is the latest legislative attempt to increase database copyright protection in the United States. In 1996, then-Rep. Carlos Moorhead introduced a bill that would have created a sui generis model, or a new category of intellectual property protection for databases. In 1997, the U.S. delegation to a meeting of the World Intellectual Property Organization (WIPO) also backed a new category of protection for databases.

But academic and research interests opposed the U.S. proposal, arguing that it did not adequately protect research, educational, and other “fair uses” of data and would give database publishers too much control over the data their products contained. The new protection, they claimed, might prohibitively raise access costs and impede research in data-intensive areas such as study of the human genome and climatology. The United States subsequently dropped the proposal from consideration at the WIPO meeting.

However, the European Union approved a directive calling on its member nations to implement sui generis database protection. Because the European directive would not cover databases from nations without something akin to sui generis protection on their books, the pressure on the United States has increased, resulting in Coble’s proposed legislation.

Opponents of Coble’s bill contend that although the European directive will deny the new sui generis protection for U.S. databases, existing copyright protections will remain in place, leaving U.S. companies no worse off. In addition, Jonathan Band, general counsel of the Online Banking Association, has noted that U.S. companies could still receive sui generis protection if they established subsidiaries in Europe.

Unlike the previous proposals, Coble’s bill does not follow the sui generis model. Instead, it is based on “misappropriation” of data. Although critics acknowledge that moving to the idea of misappropriation is a step in the right direction, they argue that the Coble bill does not include a strong enough exception for nonprofit, scientific, or educational uses of data. “The difficulties of identifying and implementing a suitable balance between incentives to invest and the preservation of both free competition and essential public-good uses should not be underestimated, nor should legislation be rushed in order to meet deadlines imposed by foreign bureaucrats,” said Vanderbilt University law professor Jerome Reichman at a House hearing last fall.

The House Judiciary Committee Subcommittee on Courts and Intellectual Property, which has been considering the bill, was expected to mark it up in early March 1998.

Patent Nonsense

Pending legislation threatens to tilt the intellectual-property playing field toward established market giants and greatly compound the risks for innovators and their backers. The bill’s effects would be so far-reaching that a group of more than two dozen Nobel laureates in science and economics, ranging across the political spectrum from Milton Friedman to Franco Modigliani to Paul Samuelson, have taken the unusual step of writing an open letter of opposition to the U.S. Senate. They warn that the pending legislation threatens “lasting harm to the United States and the world.” According to the protesting laureates, the Senate bill (S. 507), championed by Sen. Orrin Hatch (R-Utah), will “discourage the flow of new inventions that have contributed so much to America’s superior performance in the advancement of science and technology” by “curtailing the protection obtained through patents relative to the large multinational corporations.”

S. 507, a version of which has already passed in the House, is a multifaceted bill that would make many changes in the U.S. patent system, some of which have desirable aims though not necessarily the best means. But at the heart of the bill is a provision to create “prior-user rights,” which would undermine one of the fundamental goals of patents: to encourage the publication of inventions in order to stimulate innovation. The patent system works by giving an inventor a temporary exclusive right to use or license the invention in exchange for publishing it so that others can learn from it. Currently, entities that suppress, conceal, or abandon a scientific advance are not entitled to patents or other intellectual property rights. It is the disclosure of an invention, not its concealment as a trade secret, that earns a property right. But under S. 507’s prior-user rights provision, if a company elects to keep an idea secret instead of patenting it, it might still acquire significant property rights by claiming it already had the idea moving toward commercialization when someone else patented it. The company would then be allowed to use the invention without paying royalties to the patent holder. There would be no limits on volume or usage, and a business could be sold with its prior-user rights intact.

The rationale offered for prior-user rights is that because of the costs of patent protection, U.S. companies must choose carefully what they patent because it is impractical to patent every minor innovation in a product or process. Advocates raise the specter that a company that neglects to patent some small change in an important product could be prevented from using the innovation if someone later patented the idea. But former patent commissioner Donald Banner disputes this argument: “Companies don’t have to file patents on every minor invention in order to protect themselves. If something is of marginal value, all companies have to do is publish it. Then it can’t be patented and used against them.”


It is understandable that many lawyers for large corporations, including foreign companies, might covet prior-user rights. But prior-user rights gut the core concepts of the U.S. patent system, because they slow the dissemination of knowledge by promoting the use of trade secrets and destroy the exclusivity that allows new players to attract startup financing. That is why the laureates warn that “the principle of prior-user rights saps the very spirit of that wonderful institution that is represented by the American patent system.”

Robert Rines, an inventor and patent attorney who founded the Franklin Pierce Law Center, warns that “prior-user rights will destroy the exclusivity of the patent contract and thereby chill the venture capital available for many startups.” After taking the sizable risks of R&D and market testing, a fledgling enterprise would collapse if a market giant such as GE, 3M, Intel, Mitsubishi, or Microsoft suddenly followed up with a no-royalty product. Moreover, the litigation costs of challenging the validity of prior-user rights will favor those with deep pockets.

Consider the impact on university technology transfer. According to an MIT study, in 1995 alone, U.S. universities granted 2,142 licenses and options to license, most of them exclusive, on their patents. These licenses provide income for the universities and are often essential to the success of startup companies. Cornelius J. Pings, president of the Association of American Universities, recently wrote to Senator Hatch that the bill’s prior-user rights provision would effectively eliminate a university’s ability to exclusively license inventions. Thus, prior-user rights would dramatically interfere with the university-to-industry innovation process.

Inevitably, the loss of exclusivity in patents will also make university research more dependent on the largess of large companies and put universities in a weaker bargaining position. If universities cannot count on income from exclusive patents to help support research, they will turn to large companies that can provide direct research funding, with universities losing some control over research direction. Moreover, greater reliance on trade secrets, combined with prior-user rights, will increase the incentive for industrial espionage, to which the open university environment is particularly susceptible.

There is also a constitutional question. Most legal scholars, including James Chandler, head of the Intellectual Property Law Institute in Washington, D.C., interpret the Constitution’s provision on patents as intending that the property right be “exclusive.” Prior-user rights would eliminate that exclusivity and thus lead to a potentially lengthy legal battle that would put patents on uncertain footing for an extended period.

The bill’s bulk obfuscates

One of the difficulties in talking about S. 507 is that it is not just about prior-user rights: It is a complex omnibus bill that also includes controversial provisions such as corporatizing the patent office and broadening the ability of a patentee’s opponents to challenge a patent within the patent office throughout the life of the patent.

The bill was designed not for reasoned debate of its multiple features but for obfuscation. The sponsors have modified and expanded the bill repeatedly in strategic attempts to placate opponents. Significant differences exist between the bill passed in the House and the one under consideration by the Senate. No one can be certain what would result from a House-Senate conference to merge two bills that are each more than 100 pages long.

The director of Harvard University’s Office of Technology and Trademark Licensing, Joyce Brinton, observes that although the original bill was much worse, “bill modifications to re-examination and prior-user rights have not fixed all the problems.” On balance, says Brinton, “the bill is not a good deal for universities seeking to license the fruits of their research. It should be divided into component parts that can be dealt with separately.”

Says Janna Tom, vice president for external relations for the Association of University Technology Managers, “University organizations have difficulties putting forth a broad consensus position on an entire omnibus bill packed with so many patent issues, some of which we don’t oppose, but some of which, such as prior-user rights, are not favorable to the university tech transfer community. It would be far easier to address issues one by one, but Congress seems reluctant to separate them.”

The House version of the bill (H.R. 400) also suffers from “the attempt to bundle several pieces of patent legislation into one bill,” observes Shirley Strum Kenny, president of the State University of New York at Stony Brook, with the “parts that may be beneficial to all inventors outweighed by the harmful sections.” For example, Kenny and many others support a provision of the bill that lengthens the term of patents by amending recent legislation that effectively shortened the term of many patents.

Patent policy isn’t a topic that lends itself to the usual sausage-making of Congress. Any attempt to seriously improve patent bills should begin with the ability to address its measures separately. “What we want,” says MIT’s Franco Modigliani, “is that the present version (S. 507) should be junked, should not even be presented to the Senate.”

Indeed, no need has been demonstrated for moving this bill quickly or for keeping its elements intact. The more closely one looks at the bill, the more its main thrust appears to be an effort by companies at the top to pull the intellectual property ladder up after them. The patent system may be in need of periodic updating and fine-tuning to enhance its mission of bringing new blood to our economy, but it is too important to the economic health of the country to be subjected to an ill-considered, wholesale overhaul. Repeated corrections of hasty actions will only confuse and clog the system. Let’s take the time to consider each of the proposed changes separately and deliberately.

Spring 1998 Update

Progress begins on controlling trade in light arms

In an article in the Fall 1995 Issues (“Stemming the Lethal Trade in Small Arms and Light Weapons”), I urged that increased international attention be given to the problem of unregulated trafficking in small arms and light weapons. This trade, I argued, had assumed increased significance in recent years because of its insidious role in fueling ethnic, sectarian, and religious conflict. Although heavy weapons are occasionally employed in such conflict, most of the fighting is conducted with assault rifles, machine guns, land mines, and other light weapons. Hence, efforts to control the epidemic of civil conflict will require multilateral curbs on the trade in these weapons.

Although I was optimistic that this problem would gain increased attention in the years to come, I assumed that this would be a long-term process. In recent months, however, the issue has gained considerable international visibility, and a number of concrete steps have been taken to bring it under control.

Several factors account for this rise in visibility. Although a number of major conflicts have been brought under control in recent years, the level of human slaughter produced by ethnic and sectarian violence has shown no sign of abatement. Recent massacres in Algeria and Chiapas have demonstrated, once again, how much damage can be inflicted with ordinary guns and grenades. Efforts to contain the violence, moreover, have been stymied by recurring attacks on UN peacekeepers and humanitarian aid workers.

Recognizing that international efforts to address the threat of ethnic and internal conflict have been undermined by the spread of guns, a number of governments and nongovernmental organizations (NGOs) have begun to advocate tough new measures for curbing this trade. Most dramatic has been the campaign to ban antipersonnel land mines, which reached partial fulfillment in December 1997 with the signing of an international treaty to prohibit the production and use of such weapons. (The United States was among the handful of key countries that refused to sign the accord.)

Progress has also been made in curbing the illicit trade in firearms. In November 1997, President Clinton signed a treaty devised by the Organization of American States (OAS) to criminalize unauthorized gun trafficking within the Western Hemisphere and to require OAS members to establish effective national controls on the import and export of arms. A similar, if less exacting, measure was adopted by the European Union (EU) in June 1997, and tougher measures will be considered at the G-8 summit this summer.

Further steps were outlined in a report on small arms released by the UN in September 1997. The result of a year-long study by a panel of governmental experts, the report calls on member states to crack down on illicit arms trafficking within their territory and to cooperate at the regional and international level in regulating the licit trade in weapons.

No one doubts that serious obstacles stand in the way of further progress on this issue. Many states continue to produce light weapons of all types and are unlikely to favor strict curbs on their exports. But the perception that such curbs are desperately needed is growing.

The priority, at this point, is to identify a reasonable but significant set of objectives for such efforts. Unlike the situation regarding land mines, a total ban on the production and sale of light weapons is neither appropriate nor realistic, as most states believe that they have a legitimate right to arm themselves for external defense and domestic order. Rather, the task should be to distinguish illicit from licit arms sales and to clamp down on the former while establishing internationally recognized rules for the latter. Such rules should include a ban on sales to any government that engages in genocide, massacres, or indiscriminate violence against civilians; uses firearms to resist democratic change or silence dissent; or cannot safeguard the weapons in its possession. And, to provide confidence in the effectiveness of these efforts, the UN should enhance transparency in the arms business by including light weapons in its Register of Conventional Arms.

Michael Klare

Wake-up Call for Academia

Academic Duty is an important book. It provides a corrective to what Donald Kennedy, former president of Stanford University, points to as the academy’s one-sided focus: academic freedom and rights at the expense of academic obligations and responsibilities. The book is structured around chapters dealing with eight dimensions of faculty responsibilities, but it is much more than a manual on academic duties. Rather, it may be seen as a wake-up call, beckoning those in the academy to understand and take their responsibilities seriously or risk jeopardizing an already fragile institution. Indeed, Kennedy’s challenge to faculty is placed in the context of public concern, discontent, anger, and mistrust directed at higher education.

Kennedy bluntly states how important the faculty is: “In the way they function, universities are, for most purposes, the faculty.” Still, it is clear that the book is also targeting another audience. Parents, legislators, trustees, and prospective trustees will find it a first-rate introduction to what faculty value and how colleges and universities are organized and governed. In an introductory section, he gives a brief overview of the history and development of higher education in the United States and addresses the contemporary situation, post-1970, which has been characterized by tight budgets, an aging professoriate, and tight job markets. Interested observers of higher education can also learn about governance (chapter 5), the role of research (chapter 6), indirect costs in funding (chapter 6), and academic tenure (chapter 5).

What are the duties?

Kennedy writes that “much of academic duty resolves itself into a set of obligations that professors owe to others: to their undergraduate students, to the more advanced scholars they train, to their colleagues, to the institutions with which they are affiliated, and to the larger society.” He develops these duties in chapters entitled “To Teach,” “To Mentor,” “To Serve the University,” “To Discover,” “To Publish,” “To Tell the Truth,” “To Reach Beyond the Walls,” and “To Change.”

“Responsibility to students is at the very core of the university’s mission and of the faculty’s academic duty,” Kennedy writes. Yet the public is beginning to question the university’s commitment to this mission, and many faculty are unprepared for or unclear about their obligations to students. Although students expect faculty to be engaged in teaching, faculty often focus more on scholarly endeavors.

Much of the blame for this situation can be found in the nature of graduate student training. In research universities, where faculty throughout higher education are trained, students hone their skills in research in specialized fields. Then, as newly appointed faculty members, they quickly learn that their primary focus must be on research and publication in order to secure tenure. This intense focus on research, frequently lasting for more than 10 years, makes it unlikely that faculty are suddenly going to change their orientation toward undergraduate teaching, mentoring, and advising.

Except for setting teaching loads, many institutions say little about what faculty members owe their students. “The very fact that ‘professional responsibility’ is taught to everyone in the university except those headed for the academic profession is a powerful message in itself,” Kennedy writes. And the expectations of citizenship are set very low. Until the reward system in terms of tenure, promotion, and salary increments recognizes teaching more fully, the incentive to maintain the status quo will be strong.

Teaching values

In examining the important and controversial question of what to teach, Kennedy focuses in particular on criticism that universities fail to teach values. He asserts that values are important, referring, as I understand him, to basic democratic values such as respect for persons, liberty, equality, justice for all, and fairness. He makes two important points. First, he distinguishes between values and conduct. Referring to a statement made by William Bennett about getting drugs off campus, Kennedy argues that such efforts concern the regulation of conduct, not values. Correct. Second, he stresses the importance of students encountering different traditions and modes of reasoning as the basis for forming their own values. (Elsewhere in the book, he also emphasizes the importance of teaching critical thinking and analysis, a position with which most academics feel comfortable.)

Space does not permit comment on each of the duties addressed by Kennedy, but I will focus on two that struck me as having special significance. The chapter in which Kennedy’s passionate concern is most evident is “To Tell the Truth.” Returning to the theme of the university and public mistrust, he says that “higher education’s fall from grace in the past decade” has resulted partially from research misconduct. The resultant media attention, congressional hearings, and personal attacks within the academic community have caused severe damage.

Kennedy reviews some well-known cases, including those of Robert Gallo, Mikulas Popovic, and David Baltimore, and argues that the academic community to date has failed to deal well with the research misconduct issue. Scientists have been either too tolerant or silent in the face of misconduct or careless in their analysis and judgments, he believes. In turn, universities, with their too private and too nonadversarial internal processes, have erred in two directions: They have been “overly protective of [their] own faculty . . . or overly responsive to external cries for a scalp.” Government investigations, aided by panels of scientists, and prosecution efforts have been ineffective as well, he says.

The upshot is that careers and reputations have been badly and unfairly damaged. Redress of these wrongs has come too late and has often been inadequate. Kennedy suggests that appropriate procedures will have to evolve. Surprisingly, he recommends-contrary to almost all university grievance procedures-early participation by legal counsel and an opportunity to challenge witnesses.

Another chapter that deserves mention is “To Reach Beyond the Walls,” in which Kennedy promotes technology transfer as the newest academic duty. Fulfilling this duty, however, has created some complex conflict of interest problems regarding patenting, limits on faculty obligations to their institutions, and appropriate compensation levels. Kennedy’s discussion of the issues involved usefully demonstrates how new duties raise new problems. This theme is picked up in the final chapter on the duty to change.

Minor flaws

Although Academic Duty is a timely and thoughtful commentary on the current state of higher education, it doesn’t sufficiently address three areas. First, the book is primarily focused on research universities. Although Kennedy tries to include a discussion of liberal arts colleges, state colleges, and community colleges, his analysis is understandably based on his experiences during almost four decades at Stanford. The emphasis on classroom teaching and mentoring, the concept of faculty loads, and the basis for tenure decisions are substantially different in many of the nonresearch-based institutions. Accordingly, faculty in those institutions respond to different expectations and reward systems.

Second, the book is too heavily weighted toward science, Kennedy’s field of work. Yet there are important differences between science and the humanities and social sciences that affect how graduate students come to think about undergraduate teaching. For example, many science graduate students are financially supported by research grants; students in the humanities and many of the social sciences, by teaching assistantships. As a result, while science students are working as lab assistants, other graduate students are assisting in and teaching undergraduate courses. In the best assistantship arrangements, the beginning graduate student works with a professor in an apprentice relationship, learns how to mentor in discussion sessions and while providing guidance on term paper development, and anticipates that teaching will be a major part of his or her professional responsibilities. Indeed, many humanities students find the teaching and mentoring experiences much more rewarding than research and thus decide to focus their careers on teaching.

Finally, Kennedy’s discussion of academic freedom and academic duty as counterparts is a stretch. He correctly points out that “academic freedom refers to the insulation of professors and their institutions from political interference.” He adds, again correctly, that there is too much talk about academic freedom and not enough about academic duty. But I disagree with his suggestion that faculty have neglected their academic duties because of the focus on academic freedom. Although many of the duties that he enunciates are “vague and obscure,” I think that the reasons have little to do with claims of academic freedom. Kennedy acknowledges that the focus on research at the expense of teaching is rooted in the nature of graduate training, not academic freedom. A different set of problems is raised about mentoring, but again they are not based on academic freedom. The problems cited in serving the university and publishing also have many sources other than academic freedom.

Academic Duty can profitably be read by people both inside and outside of the academy. The author knows educational institutions, and, from his rich experience as president of Stanford, he engages the reader in a critical discussion about our obligations to both students and society.

Unleashing Innovation in Electricity Generation

This nation’s electric power industry is undergoing profound change. Just when lawmakers are replacing regulated monopolies with competitive entrepreneurs, a new generation of highly efficient, low-emission, modular power technologies is coming of age. Yet surprisingly little policy discussion, either in the states or in Washington, has focused on how to restructure this giant industry in ways that spur technological innovations and productivity throughout the economy.

Sheltered from competitive forces, electric utilities burn fossil fuels less efficiently today than they did in 1963. Regulated monopolies have had no incentive to take advantage of technological advances that have produced electric generating systems with efficiencies approaching 60 percent, or as much as 90 percent when waste heat is recovered. As a result, traditional power companies burn twice as much fuel (and produce twice as much pollution) as necessary.

Developing an electricity-generating industry that thrives on innovation will require much more than simply increasing R&D expenditures. Government programs and futuristic technologies are not the answer. Rather, progress will come when the barriers to competition are removed and entrepreneurial companies are freed to recreate the electricity system along market-driven lines.

Utility restructuring, if done this way, can unleash competitive forces that will disseminate state-of-the-art electric systems, foster technological innovations, double the U.S. electric system’s efficiency, cut the generation of pollutants and greenhouse gases, enhance productivity and economic development, spawn a multibillion-dollar export industry, and reduce consumer costs. But helping this new electrical world emerge means overcoming numerous legal, regulatory, and perceptual barriers.

An industry in flux

With assets exceeding $600 billion and annual sales above $210 billion, electric utilities are this nation’s largest industry-roughly twice the size of telecommunications and almost 30 percent larger than the U.S.-based manufacturers of automobiles and trucks. The pending changes affecting this giant industry will have a profound impact on this nation’s economy.

Rapid change and innovation marked the industry’s founding almost a century ago. Thomas Edison, William Sawyer, William Stanley, Frank Sprague, Nikola Tesla, and George Westinghouse competed with an array of new technologies. Each struggled to perfect dynamos that generated power; transformers and lines that delivered it; and incandescent light bulbs, railways, elevators, and appliances that used this versatile energy source.

Their competition sparked a technological and business revolution in the late 19th century. But this early competition created chaos as well as opportunity. Unique electrical arrangements conflicted with one another. More than 20 different systems operated in Philadelphia alone. A customer moving across the street often found that his electrical appliances no longer worked.

To ensure order and to protect themselves from “ruinous competition,” executives initially tried to fix prices and production levels among themselves, but the Sherman Antitrust Act of 1890 rendered such efforts illegal. The more effective step, led by J. P. Morgan and other bankers, was to merge and consolidate.

Within the next few decades, the electricity business changed dramatically. On the engineering front, larger and more efficient generators were built, a new filament constructed of tungsten produced an incandescent lamp that was preferable to a gas flame, and long-distance transmission lines sent power over great distances. As the cost of a kilowatt-hour from a central power station dropped from 22 cents in 1892 to only 7 cents three decades later, electricity became a necessity of life.

On the business front, electric companies became integrated monopolies, generating, transmitting, and distributing electricity to consumers in their exclusive service territories. For some 60 years, electric utilities provided reliable power in exchange for guaranteed government-sanctioned returns on their investments.


Recent policy and technological changes, however, are enabling entrepreneurs to generate power below the average price, ending the notion that this industry is a natural monopoly. These small-scale electricity generators are introducing competition into the electric industry for the first time in three generations. Nonutility production almost doubled from 1990 to 1996 and now contributes some 7 percent of U.S. electricity.

Three pieces of federal legislation opened the door to this limited competition. First, the Public Utility Regulatory Policies Act (PURPA) of 1978 enabled independent generators to sell electricity to regulated utilities. Second, deregulation of the natural gas market lowered the price and increased the availability of gas, a relatively clean fuel. Third, the Energy Policy Act of 1992 (and subsequent rulings by the Federal Energy Regulatory Commission) made it possible for wholesale customers to obtain power from distant utilities.

Noting the development of wholesale competition, some states (Massachusetts, California, Rhode Island, New Hampshire, Pennsylvania, and Illinois) have adopted specific plans to achieve retail competition, and most other states are considering the issue. Several lawmakers have introduced federal legislation to advance such retail competition, to ensure reciprocity among the states, and to restructure the Tennessee Valley Authority and other federal utilities.

To prepare for competition, some utilities have merged, others have sold their generating capacity, and still others have created entrepreneurial unregulated subsidiaries that are selling power in the competitive wholesale market. It appears that integrated utility monopolies are being divided. A likely scenario is that the emerging electricity industry will include competitive electricity-generating firms producing the power, federally regulated companies transmitting it along high-voltage lines, and state-regulated monopolies distributing the electricity to individual consumers and businesses. Federally chartered independent system operators would ensure the grid’s stability and fair competition.

In addition to PURPA and the Energy Policy Act, several other factors are spurring the drive toward competition in the electricity-generating industry. The paramount concern is cost. The Department of Energy (DOE) estimates that restructuring will save U.S. consumers $20 billion per year; some analysts predict a $60 billion annual savings, or $600 per household. Businesses that consume a substantial amount of electricity have been leading advocates for competition among electricity suppliers.

Environmental concerns further the call for innovation-based electric industry restructuring. Fully one-third of U.S. emissions of carbon dioxide, the principal greenhouse gas responsible for climate change, comes from burning fossil fuels in electric generators. Another third comes from production of thermal energy, and roughly half of that amount could be supplied by heat that the electric industry now throws away. To appreciate the opportunity for improved efficiency, consider that U.S. electric generators throw away more energy than Japan consumes. Unlike the regulated pollutants that can be scrubbed from power plant smokestacks, the only known way to reduce net carbon dioxide emissions is to burn less fossil fuel. Fortunately, modern technologies can cut emissions in half for each unit of energy produced.

Also pushing utility restructuring are the desire of nonutility power producers to sell at retail, protests about regional disparities in price, and failures of the old planning regime. Proponents of the status quo abound, however. Several analysts concentrate on the potential problems associated with change. Some environmentalists, for instance, fear the potential increased output from dirty coal-fired generators and the potential demise of utility-based demand-side management programs that are designed to help customers use electricity more efficiently.

Most of the debate about utility restructuring, however, has focused on just two issues: when to impose retail competition and whom to charge for the “stranded costs” of utility investments, such as expensive nuclear power plants, which will not be viable in a competitive market. The two issues are related because the longer retail competition is postponed, the more time utilities have to recoup their investments. The strategies proposed for dealing with these issues vary dramatically. Utilities argue that current customers that no longer want to buy electricity from them should be forced to pay an “exit fee” to help pay for the stranded costs. Independent power producers maintain that utilities could pay for stranded costs by improving the efficiency of their operations.

Both approaches raise questions. Although high exit fees would retire utility debt, they also would discourage the growth of independent producers. And one cannot state with certainty how much utilities could save through efficiency improvements, though the potential appears to be substantial. For example, utilities could eliminate the need for an army of meter readers trudging from house to house by installing meters that could be read electronically from a central location. Adding computer-controlled systems that constantly adjust combustion mixes in turbines could increase efficiency by as much as 5 percent.

Only the beginning

The arrival of wires early in this century introduced lights, appliances, and machines that lengthened days, reduced backbreaking drudgery, and sparked an industrial revolution. Still, we are only on the threshold of tapping electricity’s potential value. Innovation can improve the efficiency with which electricity is generated and transmitted. It can enable a wealth of new electrotechnical applications within U.S. industries and for export throughout the world. It also can spark an array of new consumer services.

Consider first the potential for vastly improved electricity generators. Efficiencies of natural gas-fired combustion turbines already have risen from 22 percent in the mid-1970s to 60 percent for today’s utility-sized (400 megawatts) combined cycle units that use the steam from a gas turbine’s hot exhaust to drive a second turbine-generator. Simpler and smaller (5 to 15 megawatts) industrial turbines have electrical efficiencies of about 42 percent and system efficiencies above 85 percent when the waste heat is used to produce steam for industrial processes. Small-scale fluid-bed coal burners and wood chip boilers also produce both electricity and heat cleanly and efficiently. Since 1991, production of thin-film photovoltaic cells has increased more than 500 percent, and more efficient motors and new lightweight materials have reduced the costs of wind turbines by 90 percent.
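As a rough illustration of how combined cycle units reach efficiencies near 60 percent (this is a standard textbook relation, not a calculation from the article, and the component efficiencies below are assumed, representative values), the gas turbine’s own efficiency is supplemented by a steam cycle driven by the turbine’s exhaust heat:

\[
\eta_{\mathrm{CC}} = \eta_{\mathrm{GT}} + \left(1 - \eta_{\mathrm{GT}}\right)\,\eta_{\mathrm{HRSG}}\,\eta_{\mathrm{ST}} \approx 0.38 + (0.62)(0.90)(0.40) \approx 0.60
\]

When the heat still remaining in the exhaust is put to work for industrial steam or building heat rather than discarded, the total system efficiency (power plus useful heat) can approach the 85 to 90 percent figures cited above.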

Several other technologies are on the horizon, including fuel cells that produce electricity through intrinsically more efficient (and cleaner) chemical reactions rather than combustion. The first generation of commercial fuel cell units is expected to achieve 55 percent electric efficiency when they appear on the market in 2001; when used to produce both power and heat, the total system efficiencies will approach 90 percent.

Innovation also is possible in the transmission and distribution grid. Insurers, environmental groups, and others have raised concerns about the grid’s stability and reliability, and growing numbers of digital technology users are concerned about power quality. More and longer-distance exchanges of power in an open electricity market could push the limits of our human-operated electricity dispatch system. Very small errors can become magnified and ripple through the system, increasing the risk of overloadings, fires, and transformer explosions. Fortunately, a host of software, hardware, and management technologies are on the horizon. Sophisticated software based on neural networks (a type of self-organizing system in which a computer teaches itself to optimize power transfers) could greatly increase power quality and reduce the risk of overloads. More robust and efficient distribution technologies, such as high-temperature superconducting transformers and wires, could further cut that risk. Several engineers also envision a distributed or dispersed energy system in which information links increasingly substitute for transmission lines, and most electricity is used in efficient “power islands.” Two-way communication and control between generator and customer can dramatically reduce the need for overcapacity. The more intelligent the system, the easier it will be to ensure that electricity takes the shortest and most efficient path to the customer.


Even the near-term possibilities for new consumer services are substantial. “Imagine the elderly and the poor having a fixed energy bill rolled into their mortgage or rent,” suggests Jeffrey Skilling, president of Enron Corp., one of the new entrepreneurial power producers. “Imagine an electric service that could let consumers choose how much of their home power is generated by renewable resources. Imagine a business with offices in 10 states receiving a single monthly bill that consolidates all of its energy costs.” Because power companies already have a wire connection to virtually every home and business, they are exploring their potential to provide a host of other services, including home security, medical alerts, cable television, and high-speed Internet access.

One promising option is onsite electricity production. “In ten years,” predicts Charles Bayless, chairman of Tucson Electric Power, “it will be possible for a 7-Eleven store to install a small ‘black box’ that brings natural gas in and produces heating, cooling, and electricity.”

In addition to avoiding transmission and distribution losses, onsite power generators offer manufacturers and building owners (or, more probably, their energy service companies) the opportunity to optimize their power systems, which would lead to increased efficiency, enhanced productivity, and lowered emissions. A study by the American Council for an Energy Efficient Economy suggests that such gains ripple through the industrial operation, as productivity benefits often exceed energy savings by more than a factor of four.

Mass-produced, small distributed generators could be a viable alternative to large centralized power plants. To illustrate the practicality of this option, engineers point out that Americans currently operate more than 100 million highly reliable self-contained electric generating plants-their cars and trucks. The average automobile power system, which has a capacity of roughly 100 kilowatts, has a per-kilowatt cost that is less than one-tenth the capital expense of a large electric generator.

Improved electric generators will also spark new technologies and systems within U.S. industry. Noting that electrotechnologies already have revolutionized the flow of information, the processing of steel, and the construction of automobiles, the Electric Power Research Institute (EPRI) envisions future applications that offer greater precision and reliability; higher quality, portability and modularity; enhanced speed and control; and “smarter” designs that can be manufactured for miniaturized end-use applications. Innovative electrotechnologies also will dramatically reduce the consumption of raw resources and minimize waste treatment and disposal.

U.S. development of efficient generators and modern electrotechnologies could also open a vast export market. The growth in global population, combined with the rising economic aspirations of the developing countries, should lead to significant electrification throughout the world.

Such benefits are not pie-in-the-sky ramblings by utopian scientists or overenthusiastic salesmen. According to a study by the Brookings Institution and George Mason University, restructuring and the resultant competition have generated cost savings and technological innovations in the natural gas, trucking, railroad, airline, and long-distance telecommunication industries. “In virtually every case,” they concluded, “the economic benefits from deregulation or regulatory reform have exceeded economists’ predictions.”

Consider the competition-sparked innovations in the telecommunications market. Within a relatively short period, consumer options increased from a black rotary phone to cellular, call waiting, voice mail, paging, long-distance commerce, and video conferencing. Similar gains could occur in the electricity industry.

What’s needed, however, is a policy revolution to accompany the emerging technological revolution. Laws and regulations must become innovation-friendly.

MIT meets the regulators

Although modern electric technologies can provide enormous benefits, implementing them is usually problematic, even for a technological supersophisticate such as the Massachusetts Institute of Technology (MIT). In 1985, MIT began to consider generating its own electricity. With its students using PCs, to say nothing of stereos, hair dryers, and toaster ovens, the university faced soaring electricity costs from the local utility, Cambridge Electric Company (CelCo). Many of MIT’s world-class research projects were also vulnerable to a power interruption or even to low-quality power. At the same time, MIT’s steam-powered heating and cooling system, which included 1950s-vintage boilers that burned fuel oil, was a major source of sulfur dioxide, nitrogen oxides, carbon monoxide, and volatile organic compounds.

The university finally settled on a 20-megawatt, natural gas-fired, combined heat and power (CHP) turbine-heat recovery system. The system was to be 18 percent more efficient than generating electricity and steam independently. It was expected to meet 94 percent of MIT’s power, heating, and cooling needs and to cut its annual energy bills by $5.4 million. Even though MIT agreed to pay CelCo $1 million for standby power, the university expected to recoup its investment in 6.9 years.
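As a back-of-the-envelope check on that payback figure (the article does not state MIT’s capital cost, so the investment derived below is an implied, assumed quantity rather than a reported one), simple payback is the capital cost divided by the annual saving:

\[
\text{capital cost} \approx \text{payback period} \times \text{annual saving} \approx 6.9 \times \$5.4\ \text{million} \approx \$37\ \text{million}
\]

If the $1 million annual standby payment to CelCo is netted out of the saving first, the implied investment is closer to 6.9 × $4.4 million, or about $30 million; either way, the numbers are consistent with a plant-scale capital project.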


MIT’s first major hurdle was getting the environmental permit it needed before construction could begin. Because it retired two 1950s-vintage boilers and relegated the remaining boilers to backup and winter-peaking duty, the CHP system would reduce annual pollutant emissions by 45 percent, an amount equal to reducing auto traffic in Cambridge by 13,000 round trips per day. Despite this substantial emissions savings, plant designers had problems meeting the state’s nitrogen oxide standard. Unfortunately for MIT, the state’s approved technology for meeting this standard, which was designed for power stations more than 10 times larger than MIT’s generator, was expensive and posed a potential health risk because of the need to store large amounts of ammonia in the middle of the campus. MIT appealed to the regional emission-regulating body, performed a sophisticated life-cycle assessment, and showed that its innovative system had lower net emissions than the state-approved technology that vented ammonia.

Although MIT overcame the environmental hurdle and completed construction in September 1995, that same year it became the nation’s first self-generator to be penalized with a stranded-cost charge. The Massachusetts Department of Public Utilities (DPU), looking ahead to state utility restructuring, approved CelCo’s request for a “customer transition charge” of $3,500 a day ($1.3 million a year) for power MIT would not receive. MIT appealed the ruling in federal court, arguing that it already was paying $1 million per year for backup power, that CelCo had known about MIT’s plans for 10 years and could have taken action to compensate, and that the utility’s projected revenue loss was inflated. But the judges ruled that their court did not have jurisdiction. MIT then appealed to the Massachusetts Supreme Judicial Court, which in September 1997 reversed DPU’s approval of the customer transition charge, remanded the case for further proceedings, and stated that no other CelCo ratepayers contemplating self-generation should have to pay similar stranded costs.

Although MIT now has its own generator, which is saving money and reducing pollution, the university’s experience demonstrates the substantial effort required to overcome regulatory and financial barriers. Very few companies that might want to generate their own power have the resources or expertise that MIT needed to overcome the regulatory obstacles. As states and the federal government move to restructure the electric industry, they have an opportunity to remove these obstacles to innovation.

Barrier busting

Lack of innovation within the U.S. electric industry is not due to any mismanagement or lack of planning by utility executives. Those executives simply followed the obsolete rules of monopoly regulation. Reforming those obsolete rules will give industry leaders the incentive to dramatically increase the efficiency of electricity generation and transmission.

Part of the problem is perceptual. More than two generations have come to accept the notion that electricity is best produced at distant generators. Few question the traditional system in which centralized power plants throw away much of their heat, while more fuel is burned elsewhere to produce that same thermal energy. Few appreciate the fact that improved small-engine and turbine technology, as well as the widespread availability of natural gas, have made it more efficient and economical to build dispersed power plants that provide both heat and power to consumers and that avoid transmission and distribution losses. Because utilities have been protected from market discipline for more than 60 years, few challenge the widespread assumption that the United States has already achieved maximum possible efficiency.

Mandating retail competition will not by itself remove the many barriers to innovation, efficiency, and productivity, as the recent history of monopoly deregulation shows. Federal legislation has deregulated the telephone industry, but some of the regional Bell operating companies have been able to preserve regulations that impede the entrance of new competitors into local telephone markets. The same is likely to be true in the electricity market, particularly if state and federal initiatives do not address potential regulatory, financial, and environmental barriers adequately.

Regulatory barriers

Unreasonable requirements for switching electricity suppliers. Most states adopting retail competition allow today’s utilities to recover most of their investments in power plants and transmission lines that will not survive in a competitive market. These so-called stranded costs are being recovered either through a fee on future electricity sales or a charge to those individuals or businesses exiting the utility’s system. High exit fees, however, would be a significant barrier to independent or onsite generators. In the wake of the MIT case, Massachusetts banned exit fees for firms switching to onsite generators with an efficiency of at least 50 percent. Other states should avoid exit fees that discourage the deployment of energy-efficient and pollution-reducing technologies. They might introduce a sliding scale that exempts new technologies by an amount proportional to their increased efficiency and decreased emissions of nitrogen oxide and sulfur dioxide. States should also resist the efforts of dominant power companies to impose lengthy notice periods before consumers can switch to a different electricity supplier.

Unreasonable requirements for selling to the grid. Dominant power companies also could limit competition by imposing obsolete and prohibitively expensive interconnection standards and metering requirements that have no relation to safety. To prevent that practice, the federal government should develop and regularly update national standards governing electricity interconnections and metering for all electric customers.

Requirements that discourage energy self-sufficiency. Many consumers now have the ability to cost-effectively generate some of their own electricity. However, large electric suppliers could block these potential competitors by penalizing customers who purchase less than all of their electricity from them or by charging excessive rates for backup or supplemental power. In order for all consumers to be able to choose their supplier of power (including backup and supplemental power), tariffs for the use of the distribution grid must be fair and nondiscriminatory. In addition, although some companies can use waste fuel from one plant to generate electricity for several of their other facilities, obsolete prohibitions on private construction of electric wires and other energy infrastructure often prevent such “industrial ecology.” States should follow Colorado’s lead and permit any firm that supplies energy to its own branches or units to construct electric wires and natural gas pipes.

Financial barriers

Tax policies that retard innovation. Depreciation schedules for electricity-generating equipment that are, on average, three times longer than those for similar-sized manufacturing equipment discourage innovation in the electric industry. Such depreciation schedules made sense when a utility monopoly wanted to operate its facilities, whatever the efficiency, for 30 or more years. They make no sense in the emerging competitive market, when rapid turnover of the capital stock will spur efficiency and technological innovation. Electric equipment depreciation therefore should be standardized and made similar to that of comparable industrial equipment.

Monopoly regulation that encourages the inefficient. Because they were able to obtain a return on any investment, utilities had an incentive to build large, expensive, and site-constructed power plants. They also had no reason to retire those plants, even when new generators were more economical, efficient, and environmentally sound. Moreover, monopoly regulation provided no reward to the utilities for energy-efficiency savings. What is needed instead is state and federal action that advances competitive markets, which will create incentives to trim fuel use and make better use of the waste heat produced by electric generation.

Environmental barriers

Unrecognized emissions reductions. U.S. environmental regulations are a classic case of a desire for the perfect (zero emissions) being the adversary of the good (lower emissions achieved through higher efficiency). Highly efficient new generators, for instance, are penalized by the Environmental Protection Agency’s (EPA’s) implementation of the Clean Air Act, which fails to recognize that even though a new generator will increase emissions at that site, it will eliminate the need to generate electricity at a facility with a higher rate of emissions, so that the net effect is a significant drop in emissions for the same amount of power generated. In order to reduce emissions overall and encourage competition, the EPA, in collaboration with the states, should instead develop output-based standards that set pollution allowances per unit of heat and electricity. The federal government should measure the life-cycle emissions of all electric-generation technologies on a regular basis. EPA or the states should also provide emissions credits to onsite generators that displace pollutants by producing power more cleanly than does the electric utility.
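
The following sketch illustrates what an output-based standard measures: emissions divided by total useful output (electricity plus recovered heat) rather than by fuel burned or by electricity alone. The plant figures are invented for illustration; the article proposes no specific numerical standards.

```python
# Illustrative comparison of emission rates under an output-based standard.
# All plant parameters below are assumed for the sake of the example.

def output_based_rate(nox_lbs: float, electricity_mwh: float, useful_heat_mwh: float) -> float:
    """Emissions per unit of useful output (electricity plus recovered heat), in lb/MWh."""
    return nox_lbs / (electricity_mwh + useful_heat_mwh)

# A conventional utility boiler: all of its waste heat is rejected, none recovered.
conventional = output_based_rate(nox_lbs=3000, electricity_mwh=1000, useful_heat_mwh=0)

# An onsite combined heat and power unit serving the same electric load while also
# supplying process heat that would otherwise be generated by a separate boiler.
chp = output_based_rate(nox_lbs=2400, electricity_mwh=1000, useful_heat_mwh=900)

print(f"conventional plant: {conventional:.2f} lb NOx per MWh of useful output")  # 3.00
print(f"CHP unit:           {chp:.2f} lb NOx per MWh of useful output")           # 1.26
```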

Subsidy of “grandfathered” power plants. The Clean Air Act of 1970 exempted all existing electric generating plants from its stringent new rules; as a result, a new generator with excess emissions can be shut down while an old plant emitting 20 times as much is allowed to keep operating. This perverse policy puts new technologies at a disadvantage, and some analysts worry that deregulation will enable the grandfathered plants, which face reduced environmental control costs, to generate more power and more pollution. Others argue that true competition, in which electric-generating companies are forced to cut costs dramatically, will make inefficient grandfathered plants far less attractive. The bottom line is that the old plants need to be replaced, and federal, state, and local governments should adopt innovative financing programs and streamline the permit process in order to speed the introduction of new facilities.

Lack of a market approach for all emissions. As it did with sulfur dioxide, the federal government should establish a pollution-trading system for all major electricity-related pollutants, including nitrogen oxides and particulates. The system should allow flexibility for emissions/efficiency tradeoffs. It should also gradually reduce the pollution allowances for all traded pollutants on a schedule that is made public well in advance.

Reliance on end-of-pipe environmental controls. One reason why industries neither generate electricity themselves nor use the waste heat for process steam is that current environmental regulations rely on end-of-pipe and top-of-smokestack controls. Such cleansers are expensive and increase electricity use dramatically. A more efficient solution would be for EPA and/or the states to allow process industries to trade electricity-hogging end-of-pipe environmental control technologies for increased efficiency with its accompanying reduction in pollution.

The innovation alternative

The United States is on the verge of the greatest explosion in power system innovation ever seen. The benefits of an innovation-based restructuring strategy for the electric industry will be widespread. Experience elsewhere in the world suggests that ending monopoly regulation will save money for all classes of consumers. In the four years since Australia began its utility deregulation, wholesale electricity prices have fallen 32 percent in real terms. Restructuring will also reduce pollution and improve air quality. The United Kingdom in 1989 began to deregulate electricity generation and sale and to shift from coal to natural gas; six years later, carbon dioxide emissions from power generation had fallen 39 percent and nitrogen oxides 51 percent.

Timing, however, is critical if the United States is to capture such benefits. In the next several years, much of the United States’ aging electrical, mechanical, and thermal infrastructure will need to be replaced. For example, if U.S. industry continues to encounter barriers to replacing industrial boilers with efficient generators such as combined heat and power systems, the country will have lost an opportunity for a massive increase in industrial efficiency.

Maintaining the status quo is no longer an option, in part because the current monopoly-based industry structure has forced Americans to spend far more than they should on outmoded and polluting energy services. If federal and state lawmakers can restructure the electric industry cooperatively, based on market efficiency and principles of consumer choice, they will bring about immense benefits for both the economy and the environment.

Scorched-Earth Fishing

The economic and social consequences of overfishing, along with the indiscriminate killing of other marine animals and the loss of coastal habitats, have stimulated media coverage of problems in the oceans. Attention to marine habitat destruction tends to focus on wetland loss, agricultural runoff, dams, and other onshore activities that are visible and easily photographed. In tropical regions, fishing with coral reef-destroying dynamite or cyanide has been in the news, the latter making it to the front page of the New York Times.

Yet a little-known but pervasive kind of fishing ravages far more marine habitats than any of these more noticeable activities. Bottom trawls, large bag-shaped nets towed over the sea floor, account for more of the world’s catch of fish, shrimp, squid, and other marine animals than any other fishing method. But trawling also disturbs the sea floor more than any other human activity, with increasingly devastating consequences for the world’s fish populations.

Trawling is analogous to strip mining or clear-cutting, except that trawling affects areas that are larger by orders of magnitude.

Trawl nets can be pulled either through mid-water (for catching fish such as herring) or along the bottom with a weighted net (for cod, flounder, or shrimp). In the latter method, a pair of heavy planers called “doors” or a rigid steel beam keeps the mouth of the net stretched open as the boat tows it along, and a weighted line or chain across the bottom of the net’s mouth keeps it on the seabed. Often this “tickler” chain frightens fish or shrimp into rising off the sea bottom; they then fall back into the moving net. Scallopers employ a modified trawl called a dredge, which is a chain bag that plows through the bottom, straining sediment through the mesh while retaining scallops and some other animals.

Until just a few years ago, trawlers were unable to work on rough bottom habitats or those strewn with rubble or boulders without risking hanging up and losing their nets and gear. For animal and plant communities that live on the sea bottom, these areas were thus de facto sanctuaries. Nowadays, every kind of seabed-silt, sand, clay, gravel, cobble, boulder, rock reef, worm reef, mussel bed, seagrass flat, sponge bottom, or coral reef-is vulnerable to trawling. For fishing rough terrain or areas with coral heads, trawlers have since the mid-1980s employed “rockhopper” nets equipped with heavy wheels that roll over obstructions. In addition to the biological problems rockhoppers create, this fishing gear also displaces commercial hook-and-line and trap fishers who formerly worked such sites without degrading the habitat. Wherever they fish and whatever they are catching, bottom trawls churn the upper few inches of the seabed, gouging the bottom and dislodging rocks, shells, and other structures and the creatures that live there.

Ravaging the seabed

Much of the world’s seabed is encrusted and honeycombed with structures built by living things. Trawls crush, kill, expose to enemies, and remove these sources of nourishment and hiding places, making life difficult and dangerous for young fish and lowering the quality of the habitat and its ability to produce abundant fish populations.

Bottom trawling is akin to harvesting corn with bulldozers that scoop up topsoil and cornstalks along with the ears. Trawling commonly affects the top two inches of sediment, which are the habitat of most of the animals that provide shelter and food for the fish, shrimp, and other animals that humans eat. At one Gulf of Maine site that was surveyed before trawling and again after rockhopper gear was used, researchers noted profound changes. Trawling had eliminated much of the mud surface of the site, along with extensive colonies of sponges and other surface-growing organisms. Rocks and boulders had been moved and overturned.

It may be hard to get excited about vanished sponges and overturned rocks. But for the fishing industry, like that in New England, which has lost thousands of jobs and hundreds of millions of dollars in recent years and is suffering the resultant social consequences, habitat changes caused by fishing gear are significant. The simplification of habitat caused by trawling makes the young fish of commercially important species more vulnerable to natural predation. In lab studies of the effects of bottom type on fish predation, the presence of cobbles, as opposed to open sand or gravel-pebble bottoms, extended the time it took for a predatory fish to capture a young cod and allowed more juvenile cod to escape predation.

But virtually the entire Gulf of Maine is raked by nets annually, and New England’s celebrated Georges Bank, the once-premier and now-exhausted fishing ground, is swept three to four times per year. Parts of the North Sea are hit seven times, and along Australia’s Queensland coast, shrimp trawlers plow along the bottom up to eight times annually. A single pass kills 5 to 20 percent of the seafloor animals, so a year’s shrimping can wholly deplete the bottom communities.
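
A back-of-envelope calculation shows how quickly those per-pass losses compound. The sketch below assumes, as a simplification not stated in the text, that each pass independently removes the same fraction of whatever animals remain.

```python
# Cumulative effect of repeated trawl passes, assuming each pass independently
# kills the same fraction of the remaining seafloor animals. The 5-20 percent
# per-pass mortality and the pass counts come from the text; the independence
# assumption is a simplification.

def surviving_fraction(kill_rate: float, passes: int) -> float:
    """Fraction of the original seafloor community left after repeated passes."""
    return (1.0 - kill_rate) ** passes

for passes in (3, 4, 7, 8):   # Georges Bank, North Sea, and Queensland pass counts from the text
    worst = surviving_fraction(0.20, passes)   # 20 percent killed per pass
    best = surviving_fraction(0.05, passes)    # 5 percent killed per pass
    print(f"{passes} passes/year: {worst:.0%}-{best:.0%} of animals remaining")
```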

More data needed

Considering how commonplace trawling has become in the world’s seas, researchers have completed astonishingly few studies. For example, virtually nothing is known about shrimp trawling’s effects on the Gulf of Mexico’s seabed, although this is one of the world’s most heavily trawled areas. The effects on fish populations and the fishing industry, although probably significant, have been difficult to quantify because there are few unaltered reference sites. But the studies available suggest that the large increases in bottom fishing from the 1960s through the early 1990s are likely to have reduced the productivity of seafloor habitats substantially, exacerbating depletion from overfishing.

Peter Auster and his colleagues at the University of Connecticut’s National Undersea Research Center have found that recent levels of fishing effort on the continental shelves by trawl and dredge gear “may have had profound impacts on the early life history in general, and survivorship in particular, of a variety of species.” At three New England sites, which scientists have studied either within and adjacent to areas closed to bottom trawls or before and after initial impact, trawls significantly reduced cover for juvenile fishes and the bottom community. In northwestern Australia, the proportion of high-value snappers, emperors, and groupers-species that congregate around sponge and soft-coral communities-dropped from about 60 percent of the catch before trawling to 15 percent thereafter, whereas less valuable fish associated with sand bottoms became more abundant.

In temperate areas, biological structures are much more subtle than the spectacular coral reefs of the tropics. A variety of animals, including the young of commercially important fish, mollusks, and crustaceans, rely on cover afforded by shells piled in the troughs of shallow sand ridges caused by storm wave action, depressions created by crabs and lobsters, and the havens provided by worm burrows, amphipod tubes, anemones, sea cucumbers, and small mosslike organisms such as bryozoans and sponges.

Some of these associations are specific: postlarval silver hake gather in the cover of amphipod tubes, young redfish associate with cerianthid tubes, and small squid and scup shelter in depressions made by skates. Newly settled juvenile cod defend territories around a shelter. Studies off Nova Scotia indicate that the survival of juvenile cod is higher in more complex habitats, which offer more shelter from predators. In another study, the density of small shrimp was 13 per square meter outside trawl drag paths and zero in a scallop dredge path.

A general misperception is that small invertebrate marine bottom dwellers are highly fecund and reproduce by means of drifting larvae that can recolonize large areas quickly. In truth, key creatures of the bottom community can disperse over only short distances. Offspring must find suitable habitat in the immediate vicinity of their parents or perish. The seafloor structures that juvenile fish rely on are often small in scale and are easily dispersed or eliminated by bottom trawls. Not only is the cover obliterated, but the organisms that create it are often killed or scattered by the trawls.

Les Watling of the University of Maine (who has studied the effect of mobile fishing gear in situ) and Marine Conservation Biology Institute director Elliott Norse have shown that trawling is analogous to strip mining or clear-cutting, except that trawling affects territories that are larger by orders of magnitude. An area equal to that of all the world’s continental shelves is hit by trawls every 24 months, a rate of habitat alteration variously calculated at between 15 and 150 times that of global deforestation through clear-cutting.

A multinational group of scientists at a workshop Norse convened in 1996 at the University of Maine concluded that bottom trawling is the most important human source of physical disturbance on the world’s continental shelves. Indeed, so few of the shelves are unscarred by trawling that studies comparing trawled and untrawled areas are often difficult to design. The lack of research contributes to the lack of awareness, and this could be one reason why trawling is permitted even in U.S. national marine sanctuaries.

Trawling is not uniformly bad for all species or all bottom habitats. In fact, just as a few species do better in clear-cuts, some marine species do better in trawled than in undisturbed habitats. A flatfish called the dab, for instance, benefits because trawling eliminates its predators and competitors and the trawls’ wakes provide lots of food.

But most species are not helped by trawling, and marine communities can be seriously damaged, sometimes for many decades. Communities that live in shallow sandy habitats subject to storms or natural traumas such as ice scouring tend to be resilient and resist physical disturbances. But deeper communities that seldom experience natural disturbances are more vulnerable and less equipped to recover quickly from trawling. In Watling and Norse’s global review of studies covering various habitats and depths, none showed general increases in species after bottom trawling, one showed that some species increased while others decreased, and four indicated little significant change. But 18 showed serious negative effects, and many of these were done in relatively shallow areas, which generally tend to be more resilient than deeper areas.

Comparing the damage caused by bottom trawling to the clear-cutting of forests is not unreasonable in light of the fact that some bottom organisms providing food or shelter may require extended undisturbed periods to recover. Sponges on New England’s sea floor can be 50 years old. Watling has said that if trawling stopped today, some areas could recover substantially within months, but certain bottom communities may need as much as a century.

Reducing the damage

Humanity’s focus on extracting food from the oceans has effectively blinded fishery managers to the nourishment and shelter that these fish themselves require. If attention were paid instead to conserving the living diversity of the seabed, fisheries would benefit automatically because the ecosystem’s productive capacity would remain high. Actions that would safeguard both the fishing industry and the seabed need to be taken now. These measures would include:

  1. No-take replenishment zones where fishing is prohibited. This would help create healthy habitats supplying adjacent areas with catchable fish. Such designations are increasingly common around the world, particularly in certain areas of the tropics, and benefits often appear within a few years. In New England, fish populations are still very low, but they are increasing in areas that the regional fishery management councils and National Marine Fisheries Service have temporarily closed to fishing after the collapse of cod and other important fish populations. The agencies should make some of these closings permanent to permit the areas’ replenishment and allow research on their recovery rates.
  2. Fixed-gear-only zones where trawls and other mobile gear are banned in favor of stationary fishing gear, such as traps or hooks and lines, that doesn’t destroy habitat. New Zealand and Australia have closed areas to bottom trawls. So have some U.S. states, although these closures are usually attempts to protect fish in especially vulnerable areas or to reduce conflicts between trawls and other fishers, not to protect habitat. Temporary closures in federal waters, such as those in New England, should in some cases be made permanent for trawls but opened to relatively benign stationary gear. What gear is permitted should depend on bottom type, with mobile gear allowed more on shallow sandy bottoms that are relatively resistant to disturbance but barred from harder, higher-relief, and deeper bottoms where trawler damage is much more serious.
  3. Incentives for development of fishing gear that does not degrade the very habitat on which the fishing communities ultimately depend. Fish and fisheries have been hurt by perverse subsidies that have encouraged overfishing, overcapacity of fishing boats, and degradation of habitat and marine ecosystems. Intelligently designed financial incentives for encouraging new and more benign technology could tap the inherent inventiveness of fishers in constructive ways.

Patented Genes: An Ethical Appraisal

On May 18, 1995, about 200 religious leaders representing 80 faiths gathered in Washington, D.C., to call for a moratorium on the patenting of genes and genetically engineered creatures. In their “Joint Appeal Against Human and Animal Patenting,” the group stated: “We, the undersigned religious leaders, oppose the patenting of human and animal life forms. We are disturbed by the U.S. Patent Office’s recent decision to patent body parts and several genetically engineered animals. We believe that humans and animals are creations of God, not humans, and as such should not be patented as human inventions.”

Religious leaders, such as Ted Peters of the Center for Theology and Natural Sciences, argue that “patent policy should maintain the distinction between discovery and invention, between what already exists in nature and what human ingenuity creates. The intricacies of nature . . . ought not to be patentable.” Remarks such as this worry the biotech industry, which has come to expect as a result of decisions over two decades by the U.S. Patent and Trademark Office (PTO) and by the courts that genes, cells, and multicellular animals are eligible for patent protection. The industry is concerned because religious leaders have considerable influence and because their point of view is consistent with the longtime legal precedent that products of nature are not patentable.

Representatives of the biotech industry argue that their religious critics fail to understand the purpose of patent law. According to the industry view, patents create temporary legal monopolies to encourage useful advances in knowledge; they have no moral or theological implications. As Biotechnology Industry Organization president Carl Feldbaum noted: “A patent on a gene does not confer ownership of that gene to the patent holder. It only provides temporary legal protections against attempts by other parties to commercialize the patent holder’s discovery or invention.” Lisa Raines, vice president of the Genzyme Corporation, summed up the industry view: “The religious leaders don’t understand perhaps what our goals are. Our goals are not to play God; they are to play doctor.”

The differences between the two groups are not irreconcilable. The religious leaders are not opposed to biotechnology, and the industry has no interest in being declared the Creator of life. The path to common ground must begin with an understanding of the two purposes of patent law.

Double vision

Patent law traditionally has served two distinct purposes. First, it secures to inventors what one might call a natural property right to their inventions. “Justice gives every man a title to the product of his honest industry,” wrote John Locke in his Two Treatises on Civil Government. If invention is an example of industry, then patent law recognizes a preexisting moral right of inventors to own the products they devise, just as copyright recognizes a similar moral right of authors. Religious leaders, who believe that God is the author of nature (even if evolution may have entered the divine plan), take umbrage, therefore, when mortals claim to own what was produced by divine intelligence.

Second, patents serve the utilitarian purpose of encouraging technological progress by offering incentives-temporary commercial monopolies-for useful innovations. One could argue, as the biotech industry does, that these temporary monopolies are not intended to recognize individual genius but to encourage investments that are beneficial to society as a whole. Gene patents, if construed solely as temporary commercial monopolies, may make no moral claims about the provenance or authorship of life.

What industry wants is not to upstage the Creator but to enjoy a legal regime that protects and encourages investment.

Legal practice in the past has avoided a direct conflict between these two purposes of patent policy (one moral, the other instrumental) in part by regarding products of nature as unpatentable because they are not “novel.” For example, an appeals court in 1928 held that the General Electric Company could not patent pure tungsten but only its method for purifying it, because tungsten is not an invention but a “product of nature.” In 1948, the Supreme Court in Funk Brothers Seed Company v. Kalo Inoculant invalidated a patent on a mixture of bacteria that did not occur together in nature. The Court stated that the mere combination of bacterial strains found separately in nature did not constitute “an invention or discovery within the meaning of the patent statutes.” The Court wrote, “Patents cannot issue for the discovery of the phenomena of nature. . . . [They] are part of the storehouse of knowledge of all men. They are manifestations of laws of nature, free to all men and reserved exclusively to none.”

The moral and instrumental purposes of patent law came into conflict earlier in this century when plant breeders, such as Luther Burbank, sought to control the commercial rights to the new varieties they produced. If patents served solely an instrumental purpose, encouraging useful labor and investment by rewarding them, one might say that patents should issue on the products of the breeder’s art. Yet both the PTO and the courts denied patentability to the mere repackaging of genetic material found in nature because, as the Supreme Court said later about a hybridized bacterium, even if it “may have been the product of skill, it certainly was not the product of invention.”

To put this distinction in Aristotelian terms, breeders provided the efficient cause (that is, the tools or labor needed to bring hybrids into being) but not the formal cause (that is, the design or structure of these varieties). Plant breeders could deposit samples of a hybrid with the patent office, but they could not describe the design or plan by which others could construct a plant variety from simpler materials. The patent statute, however, requires applicants to describe the design “in such full, clear, concise and exact terms as to enable any person skilled in the art to which it pertains . . . to make and use the same.” A breeder could do little more to specify the structure of a new variety than to refer to its ancestor plants and to the methods used to produce it. This would represent no advance in plant science; it would tell others only what they already understood.

Confronted with the inapplicability of intellectual property law to new varieties of plants, Congress enacted the Plant Patent Act of 1930 and the Plant Variety Protection Act of 1970, which protect new varieties against unauthorized asexual and sexual reproduction, respectively. Breeders were required to deposit samples in lieu of providing a description of how to make the plant. Congress thus created commercial monopolies that implied nothing about invention and therefore nothing about moral or intellectual property rights. Accordingly, religious leaders had no reason to object to these laws.

The Court changes everything

This legal understanding concerning products of nature lasted until 1980, when the Supreme Court, by a 5-4 majority, decided in Diamond v. Chakrabarty that Chakrabarty, a biologist, could patent hybridized bacteria because “his discovery is not nature’s handiwork, but his own.” The Court did not intend to reverse the long tradition of decisions that held products of nature not to be patentable. The majority opinion reiterated that “a new mineral discovered in the earth or a new plant discovered in the wild is not patentable subject matter.” The majority apparently believed that the microorganisms Chakrabarty wished to patent were not naturally occurring but resulted from “human ingenuity and research.” The plaintiffs’ lawyers failed to disabuse the Court of this mistaken impression because they focused on the potential hazards of engineered organisms, a matter (as the Court held) that is irrelevant to their patentability.

Although Chakrabarty’s patent disclosure, in its first sentence, claims that the microorganisms were “developed by the application of genetic engineering techniques,” Chakrabarty had simply cultured different strains of bacteria together in the belief that they would exchange genetic material in a laboratory “soup” just as they do in nature. Chakrabarty himself was amazed at the Court’s decision, since he had used commonplace methods that also occur naturally to exchange genetic material between bacteria. “I simply shuffled genes, changing bacteria that already existed,” Chakrabarty told People magazine. “It’s like teaching your pet cat a few new tricks.”

The Chakrabarty decision emboldened the biotechnology industry to argue that patents should issue on genes, proteins, and other materials that had commercial value. In congressional hearings on the Biotechnology Competitiveness Act (which passed in the Senate in 1988), witnesses testified that the United States was locked in a “global race against time to assure our eminence in biotechnology,” a race in which the PTO had an important role to play.

While Congress was debating the issue, the PTO was already implementing a major change in policy. It began routinely issuing patents on products of nature (or functional equivalents), including genes, gene fragments and sequences, cell lines, human proteins, and other naturally occurring compounds. For example, in 1987, Genetics Institute, Inc., received a patent on human erythropoietin (EPO), a 165-amino-acid protein that stimulates the production of red blood cells. Genetics Institute did not claim in any sense to have invented EPO; it had extracted a tiny amount of the naturally occurring polymer from thousands of gallons of urine. Similarly, Scripps Clinic patented a clotting agent, human factor VIII:C, a sample of which it had extracted from human blood.

Harvard University acquired a patent on glycoprotein 120 antigen (GP120), a naturally occurring protein on the coat of the human immunodeficiency virus. A human T cell antigen receptor has also been patented. Firms have received patents for hundreds of genes and gene fragments; they have applied for patents for thousands more. With few exceptions, the products of nature for which patents issued were not changed, redesigned, or improved to make them more useful. Indeed, the utility of these proteins, genes, and cells typically depends on their functional equivalence with naturally occurring substances. Organisms produced by conventional breeding techniques also now routinely receive conventional patents, even though they may exhibit no more inventive conception or design than those Burbank bred. The distinction between products of skill and of invention, which was once sufficient to keep breeders from obtaining ordinary patents, no longer matters in PTO policy. Invention is no longer required; utility is everything.

The search for common ground

Opponents of patents on genetic materials generally support the progress of biotechnology. At a press conference, religious leaders critical of patenting “the intricacies of nature” emphasized that they did not object to genetic engineering; indeed, they applauded the work of the biotech industry. Bishop Kenneth Carder of the United Methodist Church said, “What we are objecting to is the ownership of the gene, not the process by which it is used.” In a speech delivered to the Pontifical Academy of Sciences in 1994, Pope John Paul II hailed progress in genetic science and technology. Nevertheless, the Pope said: “We rejoice that numerous researchers have refused to allow discoveries made about the genome to be patented. Since the human body is not an object that can be disposed of at will, the results of research should be made available to the whole scientific community and cannot be the property of a small group.”

Industry representatives and others who support gene patenting may respond to their religious critics in either of two ways. First, they may reply that replicated complementary DNA (cDNA) sequences, transgenic plants and animals, purified proteins, and other products of biotechnology would not exist without human intervention in nature. Hence they are novel inventions, not identical to God’s creations. Second, industry representatives may claim that the distinction between “invention” and “discovery” is no longer relevant to patent policy, if it ever was. They may concede, then, that genetic materials are products of nature but argue that these discoveries are patentable compositions of matter nonetheless.

Consider the assertion that genes, gene sequences, and living things, if they are at all altered by human agency, are novel organisms and therefore not products of nature. This defense of gene patenting would encounter several difficulties. First, patents have issued on completely unaltered biological materials such as GP120. Second, the differences between the patented and the natural substance, where there are any, are unlikely to affect its utility. Rather, the value or usefulness of the biological product often depends on its functional identity to or equivalence with the natural product and not on any difference that can be ascribed to human design, ingenuity, or invention. Third, the techniques such as cDNA replication and the immortalization of cell lines by which biological material is gathered and reproduced have become routine and obvious. The result of employing these techniques, therefore, might be the product of skill, but not of invention.

Proponents of gene patenting might concede that genes, proteins, and other patented materials are indeed products of nature. They may argue with Carl Feldbaum that this concession is irrelevant, however, because patents “confer commercial rights, not ownership.” From this perspective, which patent lawyers generally endorse, patenting makes no moral claim to invention, design, or authorship but only creates a legal monopoly to serve commercial purposes. Ownership remains with God. Accordingly, gene patents carry no greater moral implications than do the temporary monopolies plant breeders enjoy in the results of their investment and research.

Although this reply may be entirely consistent with current PTO policy, legal and cultural assumptions for centuries have associated patents with invention and therefore with the ownership of intellectual property. These assumptions cannot be dismissed. First, patents confer the three defining incidents of ownership: the right to sell, the right to use, and the right to exclude. If someone produced and used, say, human EPO, it would be a violation of the Genetics Institute patent. But we all produce EPO as well as other patented proteins in our bodies. Does this mean we are infringing a patent? Of course not. But why not, when producing and using the same protein outside our bodies does infringe the patent? If a biotech firm patents a naturally occurring chemical compound for pesticidal use, does that mean that indigenous people who have used that chemical for centuries will no longer be allowed to extract and use it? That such questions arise suggests that patents confer real ownership of products of nature, not just abstract commercial rights.

Second, intuitive ties founded in legal and cultural history connect patents with the moral claim to intellectual property. For centuries the PTO followed the Supreme Court in insisting that “a product must be more than new and useful to be patented; it must also satisfy the requirements of invention.” The requirements of invention included a contribution to useful knowledge-some display of ingenuity for which the inventor might take credit. By disclosing this new knowledge (rather than keeping it a trade secret), the inventor would contribute to and thus repay the store of knowledge on which he drew. One simply cannot scoff, as industry representatives sometimes do, at a centuries-long tradition of legal and cultural history, enshrined in every relevant Supreme Court decision, that connects intellectual property with moral claims based on contributions to knowledge.

Religious leaders who decry current PTO policy in granting intellectual property rights to products of nature have suggested alternative ways to give the biotech industry the kinds of commercial protections it seeks. Rabbi David Saperstein, director of the Religious Action Center of Reform Judaism in Washington, D.C., has proposed that ways be found “through contract laws and licensing procedures to protect the economic investment that people make . . .” On the industry side, spokespersons have been eager to assure their clerical critics that they do not want to portray themselves as the authors of life. What industry wants, they argue, is not to upstage the Creator but to enjoy a legal regime that protects and encourages investment. Industry is concerned with utility and practical results; religious and other critics are understandably upset by the moral implications of current PTO policy.

It is not hard to see the outlines of a compromise. If Congress enacts a Genetic Patenting Act that removes the “description requirement” for genetic materials, as it has removed this requirement for hybridized plants, patents conferred on these materials may carry no implications about intellectual authorship. Such a statute, explicitly denying that biotech firms have invented or designed products of nature, might base gene patenting wholly on instrumental grounds and thus meet the objections of religious leaders.

A new statutory framework could accommodate all these concerns if it provided the kinds of monopoly commercial rights industry seeks without creating the implication or connotation that industry “invents,” “designs,” or “owns” genes as intellectual property. In other words, some middle ground modeled on the earlier plant protection acts might achieve a broad agreement among the parties now locked in dispute.

Extending Manufacturing Extension

At the start of this decade, U.S. efforts to help smaller manufacturers use technology were patchy and poorly funded. A handful of states ran industrial extension programs to aid companies in upgrading their technologies and business practices, and a few federal centers were also getting underway. Eight years later the picture has changed considerably. Seventy-five programs are now operating across the country under the aegis of a national network known as the Manufacturing Extension Partnership (MEP). This network has not only garnered broad industrial and political endorsement but has also pioneered a collaborative management style, bringing together complementary service providers to offer locally managed, demand-driven services to small manufacturers. That approach contrasts markedly with the fragmented “technology-push” style of previous federal efforts. Most important, early evidence indicates that the MEP is helping companies become more competitive. But to exert an even more profound impact, the MEP needs to pursue a strategic, long-term approach to ensuring the vitality of small manufacturers.

When proponents advanced ideas in the late 1980s for a national system of manufacturing extension, U.S. firms were facing stiff new competition from other countries. A wrenching decade of restructuring followed by strong domestic growth has boosted the competitive position of the U.S. economy. Yet most of the gains in U.S. manufacturing performance have occurred among larger companies with the resources to reengineer their industrial processes, introduce new technologies and quality methods, and transform their business practices. The majority of small firms lag in productivity growth and in adopting improved technologies and techniques. Indeed, in recent years, per-employee value-added and wages in small U.S. manufacturers have fallen increasingly behind the levels attained in larger units.

Industrial extension focuses mainly on these small manufacturers. There are some 380,000 industrial companies in the United States with fewer than 500 employees. Small manufacturers frequently lack information, expertise, time, money, and confidence to upgrade their manufacturing operations, resulting in under-investment in more productive technologies and missed opportunities to improve product performance, workforce training, quality, and waste reduction. Private consultants, equipment vendors, universities, and other assistance sources often overlook or cannot economically serve the needs of smaller firms. System-level factors, such as the lack of standardization, regulatory impediments, weaknesses in financial mechanisms, and poorly organized inter-firm relationships, also constrain the pace of technological diffusion and investment.

The MEP addresses these problems by organizing networks of public and private service providers that have the resources, capabilities, and linkages to serve smaller companies. Manufacturing extension centers typically employ industrially experienced field personnel who work directly with firms to identify needs, broker resources, and develop appropriate assistance projects. Other services are also offered, including information provision, technology demonstration, training, and referrals. Given the economy-wide benefits of accelerating the deployment of technology and the difficulties many companies face in independently implementing technological upgrades, the MEP is a classic example of how collective public action in partnership with the private sector can make markets and the technology diffusion process more efficient. For example, rather than competing with private contractors, as some critics feared, the MEP helps companies use private consultants more effectively and encourages firms to implement their recommendations.

The federal effort began when the 1988 Trade and Competitiveness Act authorized the Department of Commerce’s National Institute of Standards and Technology (NIST) to form regional manufacturing technology centers. The first few years brought just a small increase in federal support; only with the Clinton administration’s pledge to build a national system did the MEP take off. Under a competitive process managed by NIST, resources from the Technology Reinvestment Project–the administration’s defense conversion initiative–and the Commerce Department became available. The states had to provide matching funds, with private industry revenues expected as well. Existing state manufacturing extension programs were expanded and new centers were established so that, by 1997, the MEP achieved coverage in all fifty states. In FY97, state monies plus fees from firms using MEP services matched some $95 million in federal funding. Congress has endorsed a federal budget of about $112 million for the MEP in FY98–more than a sixfold increase over the 1993 allocation.

MEP centers directly operate more than 300 local offices and work with more than 2,500 affiliated public and private organizations, including technology and business assistance centers, economic development groups, universities and community colleges, private consultants, utilities, federal laboratories, and industry associations. Through this network, the MEP services reach almost 30,000 firms a year. (Some two-thirds of these companies have fewer than 100 employees.) The program is decentralized and flexible: Individual centers develop strategies and services appropriate to state and local conditions. For example, the Michigan Manufacturing Technology Center specializes in working with companies in the state’s automotive, machine tool and office furniture industries. Similarly, the Chicago Manufacturing Center has developed resources to address the environmental problems facing the city’s many small metal finishers.

Originally, Congress envisaged that NIST’s manufacturing centers would transfer cutting-edge technology developed under federal sponsorship to small firms. But MEP staff soon realized that small companies mostly needed help with more pragmatic and commercially proven technologies; these firms often also needed assistance with manufacturing operations, workforce training, business management, finance, and marketing to get the most from existing and newly introduced technologies. Most MEP centers now address customers’ training and business needs as well as promote technology. In general, centers have found that staff and consultants with private-sector industrial experience are better able than laboratory researchers to deliver such services.

Most manufacturing extension projects result in small but useful incremental improvements within firms. But in some cases, much larger results have been produced. A long-established pump manufacturer with nearly 130 employees was assisted by the Iowa Manufacturing Technology Center to gain an international quality certification; subsequently, the company won hundreds of thousands of dollars in new export sales. In western New York, a 14-employee machine shop struggled with a factory floor that was cluttered with machinery, scrap, and work in progress. The local MEP affiliate conducted a computer-aided redesign of the shop floor layout and recommended improved operational procedures, resulting in major cost savings, faster deliveries, freed management time, and increased sales for the company. In Massachusetts, manufacturing extension agents helped a 60-employee manufacturer of extruded aluminum parts address productivity, production scheduling, training, and marketing problems at its 50-year-old plant. The company credits MEP assistance with tens of thousands of dollars in savings through set-up time reductions, more timely delivery, and increased sales.

Systematic evaluation studies have confirmed that the MEP is having a positive effect on businesses and the economy. For example, in a 1995 General Accounting Office survey of manufacturing extension customers, nearly three-quarters of responding firms said that improvements in their overall business performance had resulted. Evaluations of the Georgia Manufacturing Extension Alliance reveal that one year after service, 68 percent of participating firms act on project recommendations, with more than 40 percent of firms reporting reduced costs, 32 percent reporting improved quality, and 28 percent making a capital investment. A benefit-cost study of projects completed by the Georgia program found combined net public and private economic benefits exceeded costs by a ratio of 1.2:1 to 2.7:1. A Michigan study using seventeen key technology and business performance metrics found that manufacturing technology center customers improve faster overall than comparable firms in a control group that did not receive assistance. A 1996 study of New York’s Industrial Extension Service (an affiliate of the MEP) also found that the business performance of assisted firms was improved when compared with similar companies that did not receive assistance. Finally, a recent Census Bureau analysis indicates that firms assisted by industrial extension have higher productivity growth than non-assisted companies, even after controlling for the performance of firms prior to program intervention.

Challenges and issues

The MEP has achieved national coverage and established local service partnerships; most important, the early evidence indicates that MEP services are leading to desired business and economic goals. However, now that the MEP has completed its start-up phase, several challenges and issues need to be addressed to enable the program to optimize the network it has established and to improve the effectiveness of manufacturing extension services in coming years.

Strategic Orientation. Although MEP affiliates are helping firms become leaner and more efficient, lower costs and higher efficiency are only part of a strategic approach to manufacturing and technology-based economic development. A continuing concern is that although the number of small manufacturing firms in the United States is growing, their average wages have lagged those of larger companies. Part of the problem is that many small companies produce routine commodity products with relatively low added value that are subject to intense international competition. If these firms are to offer higher wages, they must not only become more productive but also find ways to become more distinctive, responsive, and specialized. These capabilities may be promoted by deploying more advanced manufacturing processes, initiating proactive business strategies, forming collaborative relationships with other companies, or developing new products. But to help small firms move in these directions, the MEP will need to adjust its service mix to offer assistance that goes well beyond short-term problem solving for individual firms.

For instance, to help more small firms to develop and sell higher value products in domestic or export markets, the MEP should increase services that focus on new product design and development, and develop even stronger links to R&D centers and financing and marketing specialists. Already under way is a “supply-chain” initiative that aims to upgrade suppliers of firms or industries that are located across state boundaries. The MEP should do more along such lines by supporting initiatives that help suppliers and buyers talk to one another. The MEP has sponsored pilot projects to offer specialized expertise in crosscutting fields such as pollution control or electronic commerce. Again, such efforts should be expanded to stimulate the adoption of emerging technologies and practices, such as those involved with environmentally conscious manufacturing methods, the exploitation of new materials, and the use of new communication technologies. These efforts should be coupled with a greater emphasis on promoting local networks of small firms to speed the dissemination of information and encourage collaborative problem solving, technology absorption, training, product development, and marketing.

The Least-Cost Way to Control Climate Change

In December 1997 in Kyoto, Japan, representatives of 159 countries agreed to a protocol to limit the world’s emissions of greenhouse gases. Now comes the hard part: how to achieve the reductions. Emissions trading offers a golden opportunity for a company or country to comply with emissions limits at the lowest possible cost.

Trading allows a company or country that reduces emissions below its preset limit to trade its additional reduction to another company or country whose emissions exceed its limit. It gives companies the flexibility to choose which pollution reduction approach and technology to implement, allowing them to lessen emissions at the least cost. And by harnessing market forces, it leads to innovation and investment. The system encourages swift implementation of the most efficient reductions nationally and internationally; provides economic benefit to those that aggressively reduce emissions; and gives emitters an economically viable way to meet their limits, leading to worldwide efficiency in slowing global warming.

The design of a U.S. cap-and-trade program should follow the basic features of the highly successful Acid Rain Program.

Benefits to the United States from emissions trading would most likely be achieved domestically. However, trading between developed nations and between developed and developing nations has much to offer. It can accelerate investment in developing countries. And it gives developed countries the flexible instruments they say they need to garner the political support necessary to agree to large emissions reductions. In a recent speech in Congress, Sen. Robert Byrd (D-W. Va.) stated that “reducing projected emissions by a national figure of one-third does not seem plausible without a robust emissions-trading and joint-implementation framework.”

If effective trading systems are to be designed, tough political and technical issues will need to be addressed at the Conference of the Parties in Buenos Aires in November 1998, the next big meeting of the nations involved in the Kyoto Protocol. This is especially true for international trading, because different nations have significantly different approaches to reducing greenhouse gases and because many developing countries are opposed to the very notion of trading. However, if trading systems can be worked out, the United States and the world could meet emissions commitments at the lowest possible cost.

The challenge

The Kyoto Protocol requires developed countries to reduce greenhouse gas (GHG) emissions to an average of 5 percent below 1990 levels in the years from 2008 to 2012. The United States has agreed to cut emissions by 7 percent below its 1990 level. Russia and other emerging economies have somewhat lesser burdens. However, estimates indicate that at current growth rates, the United States would be almost 30 percent above its 1990 baseline for GHG emissions by 2010. Most emissions come from the combustion of fossil fuels. Carbon dioxide is responsible for 86 percent of U.S. emissions, methane for 10 percent, and other gases for 4 percent. Substantial reductions will be needed.
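
Taken together, those two figures imply the rough size of the U.S. task, as the quick calculation below shows (treating the 7 percent target and the 30 percent overshoot as exact). The result, a cut of a bit under 30 percent from projected 2010 emissions, is roughly consistent with the one-third figure Senator Byrd cited.

```python
# Rough size of the U.S. reduction task implied by the figures in the text,
# treating "7 percent below 1990" and "almost 30 percent above 1990 by 2010" as exact.

baseline_1990 = 1.00
target_2010 = baseline_1990 * (1 - 0.07)      # Kyoto commitment: 0.93 of 1990 emissions
projected_2010 = baseline_1990 * (1 + 0.30)   # business-as-usual projection: 1.30 of 1990

required_cut = (projected_2010 - target_2010) / projected_2010
print(f"Required cut from projected 2010 emissions: {required_cut:.0%}")   # about 28%
```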

One strategy would be a tax on the carbon content of fuels, which determines the amount of GHGs emitted when a fuel is burned. Although this may be the most efficient way to reduce GHG emissions, it is politically unrealistic in the United States. Our domestic strategy is more likely to be a choice between a trading system linked with a cap on overall emissions and the more traditional approach of setting emission standards for each sector of the economy.

The strategy in other countries may be different. During the Kyoto debates, a sharp difference was evident between the United States, which favored a trading approach to achieving national emissions targets, and European nations, which are contemplating higher taxes as well as command-and-control strategies such as fuel-efficiency requirements for vehicles and mandated pollution controls for utilities and industry. Nonetheless, all countries can still benefit from international trading.

Why trading can work

An emissions trading system allows emitters with differing costs of pollution reduction to trade pollution allowances or credits among themselves. Through trading, a market price emerges that reflects the marginal costs of emissions reduction. If transaction costs are low, trading leads to overall efficiency in meeting pollution goals, because each source can decide whether it is cheaper to reduce its own emissions or acquire allowances from others.

Trading creates benefits by providing flexibility in technology choices both within and between firms. For example, consider an electric utility that burns coal in its boilers. To comply with its emissions limit, it could add costly scrubbers to its smokestacks or it could buy allowances to tide it over until it is ready to invest in much more efficient capital equipment. The latter option often results in lower or no long-term costs when savings from the new technology and avoidance of the costly quick fix are figured in. It also creates the potential for greater long-term pollution reductions. By not spending money on the quick fix, the utility has more capital to invest in more efficient future processes. This point is critical, because reductions beyond those prescribed in the Kyoto Protocol will be needed in the years after 2010 to stabilize global warming for the rest of the 21st century.
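
A rough way to frame that choice is as a present-cost comparison between retrofitting now and buying allowances while waiting to repower. The sketch below is purely hypothetical; the capital costs, allowance prices, fuel savings, and discount rate are all invented to illustrate the logic, not drawn from the article.

```python
# Hypothetical compliance comparison for the coal-burning utility described in the
# text: retrofit scrubbers now, or buy allowances for a few years and then install
# more efficient equipment. Every figure below is an assumption for illustration.

def present_value(cashflows, rate=0.07):
    """Discount a list of annual costs (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Option A: install scrubbers immediately (large capital outlay now, ongoing O&M).
scrubber_now = present_value([40e6] + [3e6] * 10)

# Option B: buy allowances for five years, then replace the unit with a cleaner,
# more efficient generator whose fuel savings (negative costs) offset part of its capital.
allowances_then_repower = present_value([5e6] * 5 + [30e6] + [-2e6] * 5)

print(f"Scrubber retrofit:        ${scrubber_now / 1e6:.1f}M present cost")
print(f"Allowances, then repower: ${allowances_then_repower / 1e6:.1f}M present cost")
```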

If full trading between all countries were allowed, the costs of complying with the Kyoto Protocol would fall dramatically.

Some political and environmental groups oppose trading, equating it to selling rights to pollute. But this view fails to recognize the substantial differences in business processes and technologies, which may allow one source to reduce emissions much more cheaply than another. It also undervalues the importance of timing in investment decisions; the ability to buy a few years of time through trading may allow companies to install improved equipment or make more significant process changes. Trading leads to the firms with the lowest cost of compliance making the most reductions, creating the most cost-efficient system of meeting pollution goals.

Trading is also denigrated by those who say it can create emissions hot spots that result in local health problems. But GHGs have no local effects on human health or ecosystems; they are only problematic at their global concentration levels in the upper atmosphere.

Why a cap is needed

There are two prevailing emissions trading approaches: an emissions cap and allowance system and an open-market system. The cap-and-trade system establishes a hard cap on total emissions, say for a country, and allocates allowances to each emitter that represent its share of the total emissions. Sources could either emit precisely the amount of allowances they are issued, emit fewer tons and sell the difference or store (bank) it for future use, or purchase allowances in order to emit more than their initial allotment. Allowances are freely traded under a private system, much as a stock market operates. A great deal of up-front work must be done to establish baselines for the emitters and to put a trading process in place, but once that work is completed, trades can take place freely between emitters. No regulatory approval is needed. Environmental compliance is ensured because each emitter must have enough allowances to equal its emissions limit each year.
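
The compliance rule described here (every source must end the year holding at least as many allowances as tons emitted, with any surplus free to be sold or banked) can be captured in a few lines. The firms and quantities below are made up for illustration.

```python
# Minimal sketch of cap-and-trade allowance accounting as described in the text.
# Firms and tonnages are invented; only the bookkeeping rule is the point.

allowances = {"UtilityA": 100_000, "SteelCo": 50_000}   # tons of emissions permitted
emissions  = {"UtilityA":  85_000, "SteelCo": 62_000}   # tons actually emitted

def trade(seller: str, buyer: str, tons: int) -> None:
    """Transfer allowances between private parties; no regulatory approval is required."""
    assert allowances[seller] - emissions[seller] >= tons, "seller lacks surplus to sell"
    allowances[seller] -= tons
    allowances[buyer] += tons

trade("UtilityA", "SteelCo", 12_000)   # SteelCo covers its overage; UtilityA banks the rest

for firm in allowances:
    surplus = allowances[firm] - emissions[firm]
    status = "in compliance" if surplus >= 0 else "OUT OF COMPLIANCE"
    print(f"{firm}: banked {max(surplus, 0):,} tons, {status}")
```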

The beauty of the cap-and-trade system is an elegant separation of roles. The government exerts control in setting the cap and monitoring compliance, but decisions about compliance technology and investment choices are left to the private sector.

The best example of such a system is found in the U.S. Acid Rain Program. It has been remarkably effective. An analysis by the General Accounting Office shows that this cap-and-trade system, created in 1990 to halve emissions of sulfur dioxide by utilities, cut costs to half of what was expected under the previous rate-based standard and well below industry and government estimates. What’s more, recent research at MIT indicates that a third of all utilities complied in 1995 at a profit. This happened because there were unforeseen cost savings in switching from high-cost scrubbers to burning low-sulfur coal, and because trading enabled a utility to transfer allowances between its own units, allowing it to use low-emitting plants to meet base loads and high-emitting plants only at peak demand periods.

The aversion toward trading expressed by many developing countries ignores the many benefits that could accrue to them.

The open-market trading system works differently. Generally, there is no cap. Regulators set limits for each GHG coming from each source of emissions-say, for carbon dioxide from the smokestacks of an electric utility. Because credits are generated against these source-specific limits rather than drawn from a fixed pool of allowances, whenever two emitters want to trade, they must get regulatory approval. Although the up-front work may be less than that required for a cap-and-trade system, the need to approve each trade makes transaction costs high. Also, there is always uncertainty about whether a trade will be approved, and approvals can take weeks or months, all of which reduces the incentive to trade and creates an inefficient system.

The most recent results from the U.S. Acid Rain Program show that transaction costs are about 1.5 percent of the value traded, which is about the same as those for trades in a stock market. Transaction costs for open-market trading are an order of magnitude or more higher. Not surprisingly, the results of open-market trading in several U.S. states to reduce emissions of carbon monoxide, nitrogen oxides, and volatile organic compounds have been generally disappointing.

An emissions cap-and-trade system would reduce GHGs within the United States at very low cost. Trading between developed countries and between developed and developing countries could help nations meet their Kyoto Protocol targets, too. Let’s consider what is needed for each system.

Trading at home

The protocol allows a country to use whatever means it wants to achieve its own limit, so there is no restriction on creating a good cap-and-trade system within the United States. The first step would be to allocate the U.S. allotment of carbon emissions among emitters. Emissions come from several major sectors: electricity generation contributes 35 percent; transportation, 31 percent; general industry, 21 percent; and residential and commercial sources, 11 percent. However, because large sources are responsible for most GHGs, the United States could capture between 60 and 98 percent of emissions by including only a few thousand companies in the system.

Possibly the biggest cap-and-trade question for the United States is whom to regulate. The most efficient system would be to impose limits on carbon fuel providers-the coal, oil, and gas industries. These fuels account for up to 98 percent of carbon emissions. Industry groups are concerned, however, that regulating fuel providers is tantamount to a quota on fossil fuels, although similar reductions in fossil fuels would be required by any GHG regulation.

The alternative is to impose limits on fuel consumers-utilities, manufacturers, automobiles, and residential and commercial establishments. This method is less efficient, covering 60 to 80 percent of emissions, because it cannot practically handle the thousands of small industrial or commercial firms, not to mention residences, and because it does not provide incentives to reduce vehicle miles traveled. These inefficiencies will lead to higher overall costs and less burden-sharing.

However, political considerations will be as important as technical ones in choosing whom to regulate, and a hybrid system is possible. The most likely hybrid would be direct regulation of electric utilities and industrial boilers, capturing most of the country’s combustion of coal and natural gas. A fuel-provider system would then be used to regulate sales of petroleum products and fossil fuels to residential and commercial markets. This may be politically expedient and could be almost as efficient as a pure fuel-provider model.

The design of the cap-and-trade program should follow the basic features of the U.S. Acid Rain Program. That program sets a gold standard with three key elements: a fixed emissions cap, free trading and banking of allowances, and strict monitoring and penalty provisions.

Several added benefits could be incorporated. First, the cost of continuous emissions monitoring could be reduced, because emissions of carbon dioxide can be calculated very accurately from the carbon content of the fuel burned. Second, the system could allow trading between gases. This could spur significant reductions of methane, which contributes 10 percent of the warming potential of U.S. emissions. A ton of methane has 21 times the warming potential of a ton of carbon dioxide, and certain sources of methane-landfills, coal mines, and natural gas extraction and transportation systems-could be included. Methane control can be low-cost or even profitable, because the captured methane can be sold; thus, trading between carbon dioxide and methane sources could be a cheap way to reduce the U.S. contribution to global warming.
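
A short sketch, again in Python and with a hypothetical landfill project, shows how cross-gas trading would work using the warming potential of 21 cited above:

    # Convert a methane reduction into carbon dioxide-equivalent allowances
    # using the 100-year global warming potential cited in the text.
    # The landfill figure is hypothetical.
    GWP_METHANE = 21

    methane_captured_tons = 5_000                        # methane recovered at a landfill
    co2e_credits = methane_captured_tons * GWP_METHANE   # 105,000 tons CO2-equivalent

    print(co2e_credits)  # allowances a carbon dioxide emitter could buy instead of abating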

A third design choice is whether to allocate allowances to existing emitters for free or to auction them. Allocating allowances, as in the acid rain program, is the most politically expedient option, but it burdens later entrants, who must buy allowances from others who have already received them. Auctioning allowances would make them available to all and could have a dual benefit if the proceeds were used to reduce employment taxes or spur investment.

The U.S. Acid Rain Program’s cap-and-trade system has cut the cost of sulfur dioxide compliance to $100 per ton of abated emissions, compared to initial industry estimates of $700 to $1,000 per ton and Environmental Protection Agency (EPA) estimates of $400 per ton. The same kind of cost reductions can be expected in a GHG system. The National Academy of Sciences has estimated that the United States could reduce 25 percent of its carbon emissions at a profit and 25 percent at very low or no cost, because of the hundreds of opportunities to achieve energy efficiency or switch fuels in our economy. Examples given by the Academy include switching from coal to natural gas in electricity generation, improving vehicle fuel economy, and creating energy-efficient buildings. These already low net costs of GHG abatement would be reduced further by the Clinton administration’s recent proposal to speed the development of efficient high-end technologies.

As the world’s largest emitter of GHGs, the United States should begin to implement a cap-and-trade system now. Market signals need to be sent right away to start our economy moving toward a less carbon-intensive development path. To prompt action, EPA should set an intermediate cap, perhaps for the year 2005, because the Kyoto Protocol requires countries to show some form of “significant progress” by that year.

Trading between developed countries

International emissions trading could contribute substantially to reducing many nations’ costs of complying with the Kyoto Protocol. An assessment by the Clinton administration concluded that compliance costs could fall from $80 per ton of carbon to $10 to $20 per ton if full trading between all countries were allowed. A more realistic analysis by the World Resources Institute examined 16 leading economic models and concluded that overall costs are much lower, but that international trading could still reduce the cost by around 1 percent of gross national product over a 20-year period.

Rules for trading between nations must begin to be drawn up at the Conference of the Parties this November. However, there are key contentious issues, such as how to ensure the high credibility of trades through good compliance and monitoring systems, and how to create a privately run system in which transactions can be made in minutes rather than the months or even years required by government approval mechanisms. Whether transaction costs are high or low will probably determine the success of international trading.

Article 17 of the Kyoto Protocol authorizes emissions trading between countries listed in the protocol’s Annex B, which currently includes all industrialized countries. It is, however, short on details (it contains only three sentences). It will be up to the Conference of the Parties to define the rules, notably those for emissions reporting, verification, and enforcement, including penalties for violations. It is critical that the Conference design rules that create a system allowing private trading, with its low transaction costs. This may be difficult because of the lack of definition in the protocol and differing positions within the international community.

Key issues to be resolved include the following:

Trading by private entities. Article 17 makes no reference to it, but trading by private entities is fundamental. Requiring government approval for each trade creates such uncertainty, high transaction costs, and delays that the benefits of trading are substantially lost.

Monitoring and enforcement. High-quality monitoring and compliance systems are essential. At a minimum, this means accurate monitoring, credible government data collection and enforcement, and stiff penalties for noncompliance. In the United States, an early emissions trading system adopted to phase out leaded gasoline in the late 1980s experienced significant violations and enforcement actions until EPA tightened the rules. In the U.S. Acid Rain Program, high-quality monitoring, a public Allowance Tracking System, and steep penalties have led to 100 percent compliance-a remarkable achievement.

Compatibility of trading systems. Developed countries may adopt a wide variety of domestic strategies for achieving their GHG targets. Emissions trading would be facilitated if each were to adopt the cap-and-trade approach, but perhaps only the United States will do so. If other countries pursue other avenues, they could only participate in international trading through an open-market trading system, which involves substantial transaction costs. To ensure the least regulation and lowest cost for all, other countries should adopt the cap-and-trade model.

The “hot air” issue. The economic collapse of the former Soviet republics means that many central and eastern European countries are expected to be approximately 150 million tons below their GHG limits each year during the 2008-2012 commitment period. The protocol allows them to trade these “hot air” tons, even though they would never have been emitted. Trading for these tons could reduce other developed countries’ compliance obligations by an average of 3 percent, essentially raising the GHG cap. This issue muddies the waters because it mixes concerns about the overall cap with the issue of trading. Although trading should be allowed to function freely, it is unfortunate that the protocol allows the inclusion of these non-emissions.

The United States should also review two other trading-related provisions. Article 4 allows several developed countries to jointly fulfill their aggregate commitment to reduce GHG emissions. Although this umbrella approach is a potentially attractive vehicle for trading, its conditions are oriented toward the specific situation of the European Union. One major drawback is that the provision requires each country’s commitment to be established up front, which would restrain the operation of a more flexible market.

Article 6 authorizes a system of joint implementation among developed countries. Joint implementation differs from emissions trading because it requires that any emissions reduction done for trading be “additional to any that would otherwise occur.” This is a difficult case for any country to prove and requires even more oversight of each trade than the open-market approach. Such high transaction costs are likely to make this provision of little use, unless there is a failure to agree on good rules for regular emissions trading under Article 17.

Trading with developing countries

Trading between developed and developing countries has been hotly debated throughout the treaty process. For a developed country, the appeal is that investments made in developing countries, which are generally very energy-inefficient, can result in emissions reductions at very low cost, making allowances available. For a developing country, trading could be attractive because its sale of allowances could generate capital for projects that help it shift to a more prosperous but less carbon-intensive economy.

However, most developing countries, led by China and India, are opposed to trading. First, they simply distrust the motives of developed nations. Second, they rightly point out that the developed world has created the global warming problem and should therefore clean it up. Although legitimate, this second view ignores the many benefits that trading can bring to developing countries.

Many nongovernmental organizations (NGOs) are also wary of trading, claiming that the availability of allowances from developing countries will allow developed countries to avoid having to reduce their own emissions. This is unlikely, however. The United States, for example, will have to cut its emissions by 37 percent from its projected 2010 levels to reach its target. Developing countries that are willing to trade will simply not be able to accumulate enough tons to offset this large reduction. Indeed, trading with developing countries is likely to account for at most 10 to 20 percent of the reductions needed by a developed country.

Another major problem in trying to trade with developing countries lies in the weak emissions monitoring and compliance systems currently in place in many of them. Strengthening the basic institutional and judicial framework for environmental law may be necessary in many countries and could take considerable investment and many years. The protocol authorizes two possible ways for a developing country to participate in trading: emissions reduction projects under a provision called the Clean Development Mechanism (CDM) or regular emissions trading under Article 17. The choice depends on whether a developing country makes a specific emissions reduction commitment.

Without such a commitment, a developing country can trade only under the CDM, which is vaguely defined. Depending on decisions made at the Conference of the Parties, the CDM could be anything from a ponderous multilateral government organization whose bureaucracy would dilute any advantage of trading, to a certifying entity that creates a private system for approving trades. This second mechanism would be consistent with the kind of private trading system needed. Useful models for it may be found in the certifying mechanisms of the International Organization for Standardization or the Forest Stewardship Council.

Attention must be paid to reducing the high transaction costs of CDM trade, however. For a country’s project to qualify under the CDM, the emissions reduction must be “additional to what would have otherwise occurred.” Would a project to switch a utility from coal to natural gas combustion have been pursued anyway? Would a forest protected under a project have survived anyway? This is difficult to ascertain, as demonstrated by an existing pilot program for “activities implemented jointly,” approved by the first Conference of the Parties in 1995. The program addresses the “additionality” issue by requiring extensive review of each trade by the approving governments. This process relies on subjective prediction and can take on average one to two years, leading to very high transaction costs. Future improvements could include privatizing the verification system, standardizing predictive models, and perhaps discounting trades to adjust for uncertainties.
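
To illustrate what discounting might look like, the following sketch (hypothetical numbers throughout) credits only a fraction of a project’s claimed reduction, in proportion to the confidence that the reduction is genuinely additional:

    # Illustrative discounting of CDM credits for baseline uncertainty.
    # Both figures are hypothetical.
    claimed_reduction_tons = 200_000   # reduction claimed against the project baseline
    additionality_confidence = 0.7     # assumed probability the reduction is truly additional

    credited_tons = claimed_reduction_tons * additionality_confidence
    print(credited_tons)  # 140,000 tons credited; the discount absorbs the uncertainty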

In addition to these difficulties, significant investment under the CDM is unlikely until rules for governing it are approved. This must await the first implementation meeting of the parties to the protocol, which cannot take place until the Kyoto Protocol has been ratified-2002 at the earliest.

Alternatively, Article 17 allows a developing country to participate fully in trading, with no requirement to show that reductions are additional, if it subscribes to an emissions reduction obligation that is adopted by the Conference of the Parties under Annex B. Because such a commitment for a developing country is likely to be generous, countries making a serious commitment to reductions, such as Costa Rica with its carbon-free energy goal, might well profit from trading.

One approach would be to set the commitment based on the growth baseline concept put forward by the Center for Clean Air Policy. This requires a commitment to reduce the carbon intensity of a country’s economy, which could allow for reasonable growth of emissions while setting firm benchmarks. In this approach, developing countries would not only benefit economically from emissions trading but would take on the kinds of solid commitments needed to achieve the goals of the convention and facilitate ratification of the protocol by the developed countries.
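
As a rough illustration of how a growth baseline would operate (the numbers are hypothetical, not a proposal), a country that pledges to cut its carbon intensity can still let emissions grow with its economy, just more slowly:

    # Sketch of a growth-baseline (carbon intensity) commitment; numbers are hypothetical.
    base_emissions = 500.0    # million tons of carbon in the base year
    base_gdp = 1_000.0        # base-year GDP, in billions of dollars
    intensity_cut = 0.20      # pledged 20 percent cut in tons per dollar of GDP

    target_intensity = (base_emissions / base_gdp) * (1 - intensity_cut)

    gdp_growth = 0.5          # the economy grows 50 percent by the target year
    target_gdp = base_gdp * (1 + gdp_growth)

    allowed_emissions = target_intensity * target_gdp
    print(allowed_emissions)  # 600.0: emissions may grow 20 percent while GDP grows 50 percent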

An effective cap-and-trade system implemented within the United States would allow this country to comply with the GHG reductions it has committed to in the Kyoto Protocol at low or no cost. Because the system is a market instrument, it can rapidly bring about the adaptation, innovation, and investment needed to reduce emissions.

International trading can contribute substantially to achieving cost reductions, particularly if a cap-and-trade model with private trading mechanisms can be built into the protocol. Although such a system is unlikely to be fully mapped out at the Buenos Aires meeting in November, the first critical steps must be taken there.