Fall 1998 Update

Missile defense

In “Star Wars Redux” (Issues, Winter 1994-95), I discussed U.S. plans to develop and deploy highly capable defenses against theater (or tactical) ballistic missiles with ranges up to 3,500 kilometers. I argued that large-scale deployment of theater missile defense (TMD) systems could eventually undermine the confidence that the United States and Russia have in the effectiveness of their strategic nuclear retaliatory forces. I also argued that in the mid-term, TMD deployments could interfere with negotiations to further reduce nuclear arsenals.

In September 1997, after four years of negotiations in Geneva, the United States and Russia established a “demarcation” line between TMD systems, which are not limited by the 1972 ABM Treaty, and national missile defense (NMD) systems, which are restricted by the treaty to 100 interceptors for each side. Although Russia sought explicit constraints on the capabilities of TMD systems, the two countries did not set any direct limitations on TMD interceptor performance [the limits are only on the range (3,500 kilometers) and speed (5 kilometers per second) of target vehicles] or impose any other restrictions on TMD development or deployment. The sides did agree, however, to ban space-based interceptor missiles and space-based components based on other physical principles (such as lasers) that are capable of substituting for interceptor missiles. The United States and Russia left to each side the responsibility for determining whether its own higher-velocity TMD systems (with interceptor speeds over three kilometers per second) actually comply with the ABM Treaty. As more sophisticated TMD components are developed, this approach has the potential to generate serious disagreements over critical TMD issues, including air-based laser weapons and space-based tracking and battle-management sensors.

As thorny as the TMD issue has been during the past four years, it apparently was only the prelude to a renewed, more fundamental debate in Congress over whether to deploy an NMD. The Republican-controlled Congress supports an NMD as well as unfettered TMD deployments. Meanwhile, the Clinton administration has found itself squeezed between protecting the ABM Treaty and preserving the nuclear arms reduction process with Moscow on the one hand and managing the constant pressure from a conservative Congress for a firm commitment to missile defenses on the other.

Moscow has made it abundantly clear that it considers the ABM Treaty to be the key to continuing strategic nuclear arms reductions, that it opposes any large-scale NMD deployment, and that it considers the question of TMD deployments far from settled. Congress, on the other hand, believes that the United States should make a commitment now to an NMD; renegotiate or, if necessary, scrap the ABM Treaty to permit a large-scale NMD deployment; and refuse in any way to restrict TMD performance, deployment, or architecture. The future of missile defense may reach a crucial milestone this fall when Congress takes up a bill, already introduced in the House, declaring that “it is the policy of the United States to deploy a national missile defense.”

The Clinton administration has tried to accommodate these conflicting pressures by adopting a so-called “3+3” policy for NMD. This policy calls for continued R&D on NMD until 2000, at which time, if the threat warrants, a deployment decision could be made with the expectation that an NMD system would begin operation three years later. If, however, the threat assessment in 2000 does not justify a deployment decision, then R&D would continue, along with the capability to deploy within three years after a decision is made.

On TMD, the administration adamantly maintains that it has not negotiated a “dumbing down” of U.S. capabilities. Nonetheless, sensing that Senate opposition to limits on TMD can be overcome only by arguing that some understanding on TMD testing is the price for Russia’s agreement to eliminate multiple-warhead intercontinental ballistic missiles and significantly reduce its strategic nuclear forces, the administration has linked its submission of the TMD agreements to Congress to Russian ratification of the START II Treaty. If, however, the Russian Duma fails to ratify the START II agreements later this fall, after President Clinton’s September summit in Moscow, the entire nuclear arms reduction process could collapse under the pressure from Congress for extensive and costly TMD and NMD deployments.

Jack Mendelsohn


International Scientific Cooperation

In August 1991, we traveled to Mexico to meet with policymakers and scientists about the establishment of a United States-Mexico science foundation devoted to supporting joint research on problems of mutual interest. We encountered enthusiasm and vision at every level, including an informal commitment by the Minister of Finance to match any U.S. contribution up to $20 million. At about this time, our article “Fiscal Alchemy: Transforming Debt into Research” (Issues, Fall 1991) sought to highlight three issues: 1) the pressing need for scientific partnerships between the United States and industrializing nations, 2) the mechanism of bilateral or multilateral foundations for funding such partnerships, and 3) the device of debt swaps for allowing debtor nations with limited foreign currency reserves to act as full partners in joint research ventures. We returned from our visit to Mexico flush with optimism about moving forward on all three fronts.

Results, overall, have been disappointing. We had hoped that the debt-for-science concept would be adopted by philanthropic organizations and universities as a way to leverage the most bang for the research buck. This has not taken place. The complexity of negotiating debt swaps and the changing dynamics of the international economy may be inhibiting factors. But much more significant, in our view, is a general unwillingness in this nation to pursue substantive international scientific cooperation with industrializing and developing nations.

Although the National Science Foundation and other agencies do fund U.S. scientists conducting research in the industrializing and developing world, this work does not support broader partnerships aimed at shared goals. Such partnerships can foster the local technological capacities that underlie economic growth and environmental stewardship; we also view them as key to successfully addressing a range of mutual problems, including transborder pollution, emerging diseases, and global climate change. Yet there is a conspicuous lack of attention to this approach at all levels of the administration; most important, the State Department continues to view scientific cooperation as a question of nothing more than diplomatic process.

Incredibly, through 1995 (the latest year for which data are available) the United States had negotiated more than 800 bilateral and multilateral science and technology agreements (up from 668 in 1991), even though virtually none of these are backed by funding commitments. Nor is there any coordination among agencies regarding goals, implementation, redundancy, or follow-up. A report by the RAND Corporation, “International Cooperation in Research and Development,” found little correlation between international agreements and actual research projects. Moreover, although there are few indications that these agreements have led to significant scientific partnerships with industrializing and developing nations, there is plenty of evidence that they support a healthy bureaucratic infrastructure, including, for example, international science and technology offices at the Office of Science and Technology Policy, Department of State, Department of Commerce, and all the technological agencies. We cannot help but think that a portion of the funds devoted to negotiating new agreements and maintaining existing ones might be better spent on cooperative science.

One bright spot in this picture has been the United States-Mexico Foundation for Science, which is off to a promising start despite restricted financial resources. Although Congress approved an appropriation of up to $20 million in 1991, to date the administration has been willing to contribute only $3.8 million to the foundation. Mexico has matched this amount and remains willing to match significantly higher U.S. contributions, which we hope will be forthcoming in the next year. Some additional funds have come from philanthropic organizations. At this early stage, the foundation is focusing especially on issues of water and health in the U.S.-Mexico border region, as well as joint technological workshops and graduate student fellowships. (For more information, see the foundation’s Web site at www.fumec.org.mx.) We remain convinced that the foundation is an important prototype for scientific partnership in an increasingly interconnected and interdependent community of nations.

George E. Brown, Jr.

Daniel Sarewitz

Michael Quear

Environmental Policy in the Age of Genetics

In April 1965, a young researcher at Fairchild Semiconductor named Gordon Moore published an article in an obscure industry magazine entitled “Cramming More Components Onto Integrated Circuits.” He predicted that the power of the silicon chip would double almost annually with a proportionate decrease in cost. Moore went on to become one of the founders of Intel, and his prediction, now known as Moore’s Law, has become an accepted industry truism. Recently, Monsanto proposed a similar law for the area of biotechnology, which states that the amount of genetic information used in practical applications will double every year or two.

Sitting at the intersection of these two laws is a fascinating device known as the gene or DNA chip, a fusion of biology and semiconductor manufacturing technology. Like their microprocessor cousins, gene chips contain a dense grid or array placed on silicon using techniques such as photolithography. In the case of gene chips, however, this array holds DNA probes that form one half of the DNA double helix and can recognize and bind DNA from samples taken from people or organisms being tested. After binding, a laser activates fluorescent dyes attached to the DNA, and the patterns of fluorescence are analyzed to reveal mutations of interest or gene activity. All indications are that gene chips are obeying Moore’s Law. Three years ago, the first gene chips held 20,000 DNA probes; last year, the chips had 65,000; and chips with over 400,000 probes have recently been introduced. The chips are attracting intense commercial interest. In June 1998, Motorola, Packard Instrument, and the U.S. government’s Argonne National Laboratory signed a multiyear agreement to develop the technologies required to mass-produce gene chips.
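
A rough calculation suggests how closely the reported probe counts track Moore’s Law. The short Python sketch below is offered only as an illustration: it derives the implied doubling time from the three figures cited above, treating the dates as approximate whole-year offsets inferred from the text.

import math

# Probe counts cited above: roughly 20,000 probes on the first chips
# ("three years ago"), 65,000 a year ago, and over 400,000 recently.
# The year offsets are approximations, not precise release dates.
probe_counts = {0: 20_000, 2: 65_000, 3: 400_000}

first_year, last_year = min(probe_counts), max(probe_counts)
years = last_year - first_year
growth = probe_counts[last_year] / probe_counts[first_year]

# The doubling time T satisfies 2 ** (years / T) = growth.
doubling_time = years * math.log(2) / math.log(growth)

print(f"{growth:.0f}-fold increase over {years} years")
print(f"implied doubling time: about {doubling_time:.1f} years")

By this crude measure, probe capacity has been doubling roughly every eight months, faster than either version of the law cited above.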

So what is new about this technology? Experimental chips are already at least 25 times faster than existing gene sequencing methods at decoding information. The chip decodes genetic information a paragraph or page at a time, rather than letter by letter, sequencing an entire genome in minutes and locating missing pieces or structural changes. If we can read a person’s genetic story that fast, we can finish the book in a reasonable amount of time and understand more complex plots and subplots. Existing techniques have been valuable in identifying a small number of changes in the DNA chain commonly known as single nucleotide polymorphisms, which may result in diseases such as sickle cell anemia. However, these approaches have proved too slow and expensive to provide information on polygenic diseases, in which many genes may contribute to the emergence of disease or increased susceptibility to stressors. Gene chips are key to recognizing this multigene “fingerprint,” which may underlie diseases with complex etiologies involving the interaction of multiple genes as well as environmental factors.

Much environmental regulation protects human health by a very indirect route. For example, a very high dose of a chemical might be found to cause cancer in rats or other laboratory animals. Even though the mechanism by which the cancer is formed may be poorly understood, an estimate is made that a certain amount of that chemical would be harmful to humans. Estimates are then made about what concentration of that chemical in the environment might result in a high level in humans and what level of discharge of that chemical from an industrial plant or other source might result in the dangerously high concentration in the environment. Finally, the facility is told that it must limit its release of that chemical to a specific level, and, in many cases, the technologies to accomplish these reductions are prescribed. This long series of assumptions, calculations, and extrapolations makes the regulatory process slow, inexact, and contentious: a breeding ground for litigation, scientific disputes, and public confusion.
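
To make that chain of extrapolations concrete, here is a deliberately simplified sketch in Python. Every number and factor in it is a hypothetical placeholder rather than a value from any actual regulation; the point is only to show how many stacked assumptions separate the animal study from the discharge permit.

# Illustrative only: all figures below are hypothetical, not regulatory values.

# Step 1: animal study. Highest dose with no observed adverse effect (NOAEL),
# in mg of chemical per kg of body weight per day.
noael_mg_per_kg_day = 10.0

# Step 2: extrapolate to humans by dividing by uncertainty factors
# (one factor of 10 for animal-to-human differences, another for
# variation among people), yielding an acceptable daily dose.
uncertainty_factor = 10 * 10
reference_dose = noael_mg_per_kg_day / uncertainty_factor  # mg/kg/day

# Step 3: translate that dose into an ambient water concentration,
# assuming a 70 kg adult who drinks 2 liters of water per day.
body_weight_kg = 70.0
water_intake_l_per_day = 2.0
ambient_limit_mg_per_l = reference_dose * body_weight_kg / water_intake_l_per_day

# Step 4: back-calculate an allowable discharge, assuming the receiving
# river dilutes the effluent 1,000-fold (an invented dilution factor).
dilution_factor = 1_000
discharge_limit_mg_per_l = ambient_limit_mg_per_l * dilution_factor

print(f"reference dose:      {reference_dose:.3f} mg/kg/day")
print(f"ambient water limit: {ambient_limit_mg_per_l:.2f} mg/L")
print(f"discharge limit:     {discharge_limit_mg_per_l:.0f} mg/L")

Each step multiplies the uncertainty of the one before it, which is why the end result is so often contested.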

Gene chip technology could turn that system on its head. Biomarkers (substances produced by the body in response to chemicals) have already made it possible to measure the level of a specific chemical such as lead, benzene, or vinyl chloride in an individual’s urine, blood, or tissue. Gene chips will make it possible to observe the actual loss of genetic function and predict susceptibility to change induced by a chemical. As the cost of the technology decreases, it will be possible to do this for many, many more people; ultimately, it might be cost-effective to screen large populations. The focus of environmental management will shift from monitoring the external environment to looking at how external exposures translate into diseases at a molecular level. This could radically change the way we approach environmental risk assessment and management, especially if diagnostic information from the gene chips is used in combination with emerging techniques in the field of molecular medicine. This could open up whole new avenues for prevention and early intervention and allow us to custom-design individual strategies to reduce or avoid a person’s exposure to environmental threats at a molecular level. Some simple intervention measures already exist. For instance, potassium iodide can block the thyroid’s uptake of radioactive iodine, which causes thyroid cancer, and the Nuclear Regulatory Commission has recently approved its distribution to residents living in close proximity to nuclear power plants. However, unlocking the Holy Grail of the human genome moves the intervention possibilities to a very different level. New techniques are now being developed that block the ability of environmental toxins to bind to proteins and cause damage, speed up the rate at which naturally occurring enzymes detoxify substances, or enhance the ability of the human body to actually repair environmentally damaged DNA. We move from the end-of-the-pipe world of the 1970s to the inside-the-gene world of the next millennium.

Potential misuse

This potential comes packaged with significant dangers. Francis Collins, director of the Human Genome Project at the National Institutes of Health, recently remarked that the ability to identify individual susceptibility to illness “will empower people to take advantage of preventive strategies, but it could also be a nightmare of discriminatory information that could be used against people.”

Without the proper safeguards in place, possibilities will abound for coercive monitoring, job discrimination, and violations of privacy. From a policy perspective, the danger exists that we could either overreact to these potential problems or react too late. Some of the more obvious issues are being addressed by a part of the Human Genome Project that looks at ethical, legal, and social implications of our expanding knowledge of genetics. However, the privacy and civil liberties debate has tended to mask more subtle, but potentially profound, effects on fields other than medicine. The use of gene chips could forever alter the rules of the game that have dominated environmental protection for 25 years. Here are a number of speculative concerns for those responsible for environmental policy.

First, as such testing and intervention capacity becomes cheaper, more accessible, and more widespread, it puts more power in the hands of the public and the medical profession and takes it away from the high priesthood of toxicologists and risk assessors in our regulatory institutions. This is not necessarily bad, because polls have shown that the public has a greater trust in the medical profession than in the environmental regulatory community. However, it is not at all clear that the medical community wants, or is trained, to take on this role. Research done by Neil Holtzman at the Johns Hopkins School of Medicine has shown that many physicians have a poor understanding of the probabilistic data generated by genetic testing, and other studies have indicated that many physicians are uncomfortable about sharing such information with patients. The few genetic tests already available for diseases such as cystic fibrosis have taxed our capability to provide the counseling needed to deal with patient fears and the new dilemmas of choice. Added to this picture is the potential involvement of the managed care and insurance industries in defining the testing, treatments, costs, and ultimate outcomes. Genetic information could be used by insurance companies to deny coverage to healthy people who have been identified as being susceptible to environmentally related diseases. Knowledge is power, and if the gene chips provide that knowledge to a new set of actors, environmental decisionmaking could be radically altered in ways that provide immense opportunity but that could also result in institutional paralysis, mass confusion, and public distrust.

Second, in a world where environmental policy is increasingly driven and shaped by constituencies, the new technologies offer a stepping stone toward the “individualization” of environmental protection and are a potential time bomb in our litigious culture. The rise of toxic tort litigation over the past 25 years has closely paralleled our scientific ability to show proximate causation; that is, to connect a specific act with a specific effect. Until now, environmental litigation has fallen largely into two classes: class action suits filed by large numbers of individuals exposed to proven carcinogens such as asbestos, or suits brought by people in cases where exposures to environmental agents have led to identifiable clusters of diseases such as leukemia. The possibility that individuals could acquire enough genetic evidence to support lawsuits for environmental exposures raises some truly frightening prospects. Though workers’ compensation laws generally bar lawsuits for damages resulting from injuries or illnesses in the workplace, loopholes exist, especially if employers learn of exposures and/or susceptibilities through genetic testing and do not notify workers. The expanded use of gene chips for medical surveillance in the workplace increases the possibilities for discrimination across the board. Finally, the testing of large populations with this technology may increase the likelihood of legal disputes based on emerging evidence of gender-, ethnicity-, or race-based variances in susceptibility to environmentally linked diseases. We are quickly wandering into an area with few legal protections and even fewer legal precedents in case law.

Third, the increased knowledge of human genetic variation and vulnerability will likely increase what Edward Tenner of Princeton University has described as the “burden of vigilance”: a need to continuously monitor at-risk individuals and environmental threats at levels far exceeding the capacities of our existing data-gathering systems. This could result in a demand for microlevel monitors for household or personal use, better labeling of products, and far greater scrutiny of the more than 2,000 chemicals that are registered annually by the Environmental Protection Agency (EPA) and used in commerce (we now have adequate human toxicity data on less than 40 percent of these). Much of this new data will not provide unequivocal answers but will require the development of new interpretive expertise and mechanisms to deal with problems such as false positives, which could lead to inaccurate diagnoses and intervention errors.
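
The arithmetic of mass screening shows why false positives loom so large. In the hypothetical Python sketch below (the prevalence and test-accuracy figures are invented for illustration), a test that is 99 percent accurate, applied to a million people of whom 1 percent carry a given susceptibility, produces as many false alarms as true findings.

# Hypothetical screening arithmetic; the rates are illustrative, not data.
population = 1_000_000
prevalence = 0.01    # 1% of people actually carry the susceptibility
sensitivity = 0.99   # 99% of carriers test positive
specificity = 0.99   # 99% of non-carriers test negative

carriers = population * prevalence
non_carriers = population - carriers

true_positives = carriers * sensitivity
false_positives = non_carriers * (1 - specificity)

# Probability that a person who tests positive actually carries the trait.
positive_predictive_value = true_positives / (true_positives + false_positives)

print(f"true positives:  {true_positives:,.0f}")
print(f"false positives: {false_positives:,.0f}")
print(f"chance that a positive result is real: {positive_predictive_value:.0%}")

Half of the positive results in this example are wrong, even though the test itself is very good; interpreting such results responsibly is precisely the kind of new expertise these systems would demand.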

Finally, though the costs of the chips can be expected to drop, there may be a period of time when they would be available only to the wealthy. This period of time could be much longer if the health care system refused to underwrite their use, making early detection and associated intervention options unavailable to the uninsured and low-income portions of the population who might have high exposures to environmental toxins. This situation would also be found in less developed countries with dirty industries and poor environmental laws, where populations may have few options to monitor exposure and ultimately escape disease. Who will decide who benefits and who does not?

Keeping pace

This is clearly a situation where rapid scientific and technological advance could outrun our institutional capabilities and test our moral fabric. As we all know, social innovation and moral development do not obey Moore’s Law. The most important question is not whether such technologies will be developed and applied (they will) but whether we will be ready as a society to deal with the associated ethical, institutional, and legal implications. Steve Fodor of Affymetrix, one of the leading manufacturers of gene chips, recently remarked, “Ninety-nine percent of the people don’t have an inkling about how fast this revolution is coming.” Although there has been a recent flurry of attempts by a wide variety of think tanks and policy analysts to “reinvent” the regulatory system, there is no indication that the environmental policy community is paying attention to this development.

This brings us to the final and most important lesson of the gene chip. It was only 35 years ago that Herman Kahn and his colleagues at the RAND Corporation confronted the policymaking community with the possibilities and probable outcomes of another of our large scientific and technological enterprises: the Manhattan Project. By outlining the potential outcomes of a war fought with thermonuclear weapons, they taught us two important things. First, science, and especially big science like the Human Genome Project, has far-reaching effects that are often unintended, unanticipated, and unaddressed by the people directly involved in the scientific enterprise. Second, and probably more important, is that better foresight is possible and can lead to better public policies and decisionmaking. Though the pace of technological change has accelerated, we have forgotten Kahn’s lessons. The elimination of the Office of Technology Assessment in 1995 helped ensure that we will continue to drive through the rapidly changing technological landscape with the headlights off. In times like these, we need more foresight, not less. Embedded in the intriguing question of how the gene chip might affect environmental policy is the larger question of who will protect us from ourselves, our creations, and, ultimately, our hubris. We are placing ourselves in a position described so well over 100 years ago by Ralph Waldo Emerson when he wrote, “We learn about geology the day after the earthquake.”

Toward a Global Science

In the early 1990s, the Carnegie Commission on Science, Technology, and Government published a series of reports emphasizing the need for a greatly increased role for science and scientists in international affairs. In a world full of conflicting cultural values and competing needs, scientists everywhere share a powerful common culture that respects honesty, generosity, and ideas independently of their source, while rewarding merit. A major aim of the National Academy of Sciences (NAS) is to strengthen the ties between scientists and their institutions around the world. Our goal is to create a scientific network that becomes a central element in the interactions between nations, increasing the level of rationality in international discourse while enhancing the influence of scientists everywhere in the decisionmaking processes of their own governments.

I am pleased to announce that we recently received a letter from the Department of State in which Secretary Madeleine Albright requests that we help the State Department determine “the contributions that science, technology, and health can make to foreign policy, and how the department might better carry out its responsibilities to that end.” I want to begin that effort by suggesting four principles that should guide our activities.

Science can be a powerful force for promoting democracy. The vitality of a nation’s science and technology enterprise is increasingly becoming the main driver of economic advancement around the world. Success requires a free exchange of ideas as well as universal access to the world’s great store of knowledge. Historically, the growth of science has helped to spread democracy, and this is even more true today. Many governments around the world exert power over their citizens through the control of information. But restricting access to knowledge has proven to be self-destructive to the economic vitality of nations in the modern world. The reason is a simple one: The world is too complex for a few leaders to make wise decisions about all aspects of public policy.

New scientific and technological advances are essential to accommodate the world’s rapidly expanding population. The rapid rise in the human population in the second half of this century has led to a crowded world, one that will require all of the ingenuity available from science and technology to maintain stability in the face of increasing demands on natural resources. Thus, for example, a potential disaster is looming in Africa. Traditionally, farmers had enough land available to practice shifting cultivation, in which fields were left fallow for 10 or so years between cycles of plantings. But now, because of Africa’s dramatically increasing population, there is not enough land to allow these practices. The result is a continuing process of soil degradation that reduces yields and will make it nearly impossible for Africa to feed itself. The best estimates for the year 2010 predict that fully one-third of the people in Sub-Saharan Africa will have great difficulty obtaining food.

It has been argued that the ethnic conflicts that led to the massacres in Rwanda were in large part triggered by conflicts over limited food resources. We can expect more such conflicts in the future, unless something dramatic is done now. How might the tremendous scientific resources of the developed world be brought to bear on increasing the African food supply? At present, I see large numbers of talented, idealistic young people in our universities who would welcome the challenge of working on such urgent scientific problems. But the many opportunities to use modern science on behalf of the developing world remain invisible to most scientists on our university campuses. As a result, a great potential resource for improving the human condition is being ignored.

Electronic communication networks make possible a new kind of world science. In looking to the future, it is important to recognize that we are only at the very beginning of the communications revolution. For example, we are promised by several commercial partnerships that by the year 2002 good connectivity to the World Wide Web will become available everywhere in the world at a modest cost through satellite communications. Moreover, at least some of these partnerships have promised to provide heavily subsidized connections for the developing world.

Developing countries have traditionally had very poor access to the world’s store of scientific knowledge. With the electronic publication of scientific journals, we now have the potential to eliminate this lack of access. NAS has decided to lead the way with our flagship journal, the Proceedings of the National Academy of Sciences, making it free on the Web for developing nations. We also are hoping to spread this practice widely among other scientific and technical journals, since there is almost no cost involved in providing such free electronic access.

The next problem that scientists in developing countries will face is that of finding the information they need in the mass of published literature. In 1997, the U.S. government set an important precedent. It announced that the National Library of Medicine’s indexing of the complete biomedical literature would be made electronically available for free around the world through a Web site called PubMed. The director of the PubMed effort, David Lipman, is presently investigating what can be done to produce a similar site for agricultural and environmental literature.

The communications revolution also is driving a great transformation in education. Already, the Web is being used as a direct teaching tool, providing virtual classrooms of interacting students and faculty. This tool allows a course taught at one site to be taken by students anywhere in the world. Such technologies present an enormous opportunity to spread the ability to use scientific and technical knowledge everywhere, an ability that will be absolutely essential if we are to head for a more rational and sustainable world in the 21st century.

Science academies can be a strong force for wise policymaking. In preparing for the future, we need to remember that we are only a tiny part of the world’s people. In 1998, seven out of every eight children born will be growing up in a developing nation. As the Carnegie Commission emphasized, we need more effective mechanisms for providing scientific advice internationally, particularly in view of the overwhelming needs of this huge population.

In 1993, the scientific academies of the world met for the first time in New Delhi; the purpose was to address world population issues. The report developed by this group of 60 academies was presented a year later at the 1994 UN Conference at Cairo. Its success has now led to a more formal collaboration among academies, known as the InterAcademy Panel (IAP). A common Web site for the entire group will soon be online, and the IAP is working toward a major conference in Tokyo in May of 2000 that will focus on the challenges for science and technology in making the transition to a more sustainable world.

Inspired by a successful joint study with the Mexican academy that produced a report on Mexico City’s water supply, we began a study in 1996 entitled “Sustaining Freshwater Resources in the Middle East” as a collaboration among NAS, the Royal Scientific Society of Jordan, the Israel Academy of Sciences and Humanities, and the Palestine Health Council. The final version of this report is now in review, and we expect it to be released this summer. I would also like to highlight a new energy study that we initiated this year with China. Here, four academies, two from the United States and two from China, are collaborating to produce a major forward-looking study of the energy options for our two countries. Recently, the Indian Science and Engineering Academies have indicated an interest in carrying out a similar energy study with us. I believe that these Indian and Chinese collaborations are likely to lead us all toward a wiser use of global energy resources.

My dream for the IAP is to have it become recognized as a major provider of international advice for developing nations, the World Bank, and the many similar agencies that require expert scientific and technical assistance. Through an IAP mechanism, any country or organization seeking advice could immediately call on a small group of academies of its choosing to provide it with politically balanced input coupled with the appropriate scientific and technical expertise.

The road from here

In the coming year, NAS will attempt to prepare an international science road map to help our State Department. My discussions with the leaders of academies in developing countries convince me that they will need to develop their own road maps in the form of national science policies. To quote José Goldemberg, a distinguished scientific leader from Brazil: “What my scientist colleagues and national leaders alike failed to understand was that development does not necessarily coincide with the possession of nuclear weapons or the capability to launch satellites. Rather, it requires modern agriculture, industrial systems, and education . . . This scenario means that we in developing countries should not expect to follow the research model that led to the scientific enterprise of the United States and elsewhere. Rather, we need to adapt and develop technologies appropriate to our local circumstances, help strengthen education, and expand our roles as advisers in both government and industry.”

In his work for the Carnegie Commission, Jimmy Carter made the following observations about global development: “Hundreds of well-intentioned international aid agencies, with their own priorities and idiosyncrasies, seldom cooperate or even communicate with each other. Instead, they compete for publicity, funding, and access to potential recipients. Overburdened leaders in developing countries, whose governments are often relatively disorganized, confront a cacophony of offers and demands from donors.”

My contacts with international development projects in agriculture have made me aware that many experiments are carried out to try to improve productivity. A few are very successful, but many turn out to be failures. The natural inclination is to hide all of the failures. But as every experimental scientist knows, progress is made by learning from what did not work and then incorporating this knowledge into a general framework for moving forward. I would hope that we, as scientists, could lead the world toward more rational approaches to improving international development efforts.

The U.S. economy is booming. But as I look around our plush shopping malls, observing the rush of our citizens to consume more and more, I wonder whether this is really progress. In thinking about how our nation can prove itself as the world leader it purports to be, we might do well to consider the words of Franklin Roosevelt: “The test of our progress is not whether we add more to the abundance of those who have much; it is whether we provide enough for those who have little.” As many others have pointed out, every year the inequities of wealth are becoming greater within our nation and around the world. The spread of scientific and technological information throughout the world, involving a generous sharing of knowledge resources by our nation’s scientists and engineers, can improve the lives of those who are most in need around the globe. This is a challenge for science and for NAS.

Something Old, Something New

First, I want to welcome back the National Academy of Engineering as a sponsor of Issues. NAE was an original sponsor and supported the magazine for more than a decade. During a period of transition in its leadership, it suspended its sponsorship, but now that it has regained its equilibrium under the leadership of Wm. A. Wulf, it has renewed its commitment to Issues. Even when NAE was not an active sponsor, Issues addressed the subjects of technology and industry that are of interest to NAE members. With NAE back as an active participant, we should be able to strengthen our coverage of these topics. This issue’s cover stories on the relationship between information technology and economic productivity should be of particular interest to NAE members.

Second, I want to announce an initiative to enhance our online presence. For several years, we have been posting on our website the table of contents and several articles from each issue. Beginning with the fall issue, we will make the entire contents of each issue available online, and we will create a searchable database of back issues. This database will be integrated with the much larger database of publications from the National Academy Press. A search for material about Superfund, for example, will turn up references to National Research Council reports as well as to Issues articles. Our hope is that this resource will be indispensable to public policy researchers.

In addition, we want to transform the Forum section into an active online debate. Forum letters will be posted as soon as they arrive, authors will be encouraged to respond to the letters, and everyone will be invited to participate by commenting on the letters or the original articles. We believe that this feature will be particularly valuable for a quarterly publication. It will mean that it’s not necessary to wait three months to hear responses to articles, that real-time policy debate will be possible, and that there need be no space limitations to constrain comment. Forum has always been one of the most popular sections of the magazine, and this can only enhance its value.

Electronic economics

Access to the Issues website will be free. In this, we are following the example of the National Academy Press, which in 1997 put its entire backlist of publications online with free access. Although some worried that this would reduce sales, the opposite occurred. People became interested in what they found online and opted to buy the printed version. Book sales have increased.

It appears that the Internet is a good way to find information but that print is still the preferred way to use it. Reading on the screen is difficult, printing web pages is slow, and bound books and magazines are convenient to hold and store. The time may come when electronic text rivals the printed word for convenience, but it’s not here yet. We expect that web visitors who find Issues useful will want to subscribe to the print version. And for those who can’t afford it or who use it rarely, we will be providing a public service.

The goal of online publishing is not simply to produce the electronic equivalent of the print edition. The true value of the World Wide Web is its unlimited linking ability. An online version can provide much more than a print edition. When an author cites a specific report, a click on the mouse can call that report to the screen. A data reference can be linked to the full set of data from which the reference was drawn. Recommended reading becomes a list of quick links to the full text of the publications. Combined with the capacity for instantaneous online debate, linking makes online publishing much livelier and more interactive.

Finally, we would like to be able to alert you when new material appears on our website. It’s frustrating to pay repeated visits to a site and find nothing new. That’s not likely with the NAS and NAP websites, which are updated regularly, but we would like to make it easier for you to decide when you want to surf in. We plan to develop an electronic mailing list to which we would send alerts announcing the presence of new material on the website. In this way you will know when something of interest is posted without having to take the time to visit the site. If you want to be placed on this list, please send your e-mail address to [email protected]. Eventually, we want to code this list with your interests so that you receive an alert only when it refers to topics that you specify. Although Issues will not be available online for a few months, you can already find an abundance of valuable information at the National Academy of Sciences (www.nas.edu) and the National Academy Press (www.nap.edu).

To help us understand how our readers use the Internet (and to update information that is useful to the editorial and business decisions at Issues), we have incorporated into this issue a brief reader survey. We would be very grateful if you would take the time to complete the survey and return it to us by fax or mail. Once we have established an active online presence, surveys such as this will be less necessary. But for now, it’s the best way for us to stay in touch with you. Please respond.

Shaping a Smarter Environmental Policy for Farming

In the summer of 1997, Maryland Governor Parris Glendening suddenly closed two major rivers to fishing and swimming, after reports of people becoming ill from contact with the water. Tests uncovered outbreaks of a toxic microbe, Pfiesteria piscicida, perhaps caused by runoff of chicken manure that had been spread as fertilizer on farmers’ fields. Glendening’s action riveted national attention on a long-overlooked problem: the pollution of fresh water by agricultural operations. When the governor then proposed a ban on spreading chicken manure, the state’s poultry producers lashed back, claiming they would go out of business if they had to pay to dispose of the waste.

The controversy, and others springing up in Virginia, Missouri, California, and elsewhere, has galvanized debate among farmers, ranchers, environmentalists, and regulators over how to control agricultural pollution. The days of relying on voluntary controls and payments to farmers for cutbacks are rapidly ending. A final policy is far from settled, but even defenders of agriculture have endorsed more aggressive approaches than were considered feasible before recent pollution outbreaks.

Maryland’s proposed ban is part of a state-led shift toward directly controlling agricultural pollution. Thirty states have at least one law with enforceable measures to reduce contamination of fresh water; most of these laws have been enacted in the 1990s. Federal policy has lagged behind, but President Clinton’s Clean Water Action Plan, introduced in early 1998, may signal a turn toward more direct controls as well. After decades of little effort, state and federal lawmakers seem ready to attack the problem. But there is a serious question as to whether they are going about it in the best way.

The quality of U.S. rivers, lakes, and groundwater has improved dramatically since the 1972 Clean Water Act, which set in motion a series of controls on effluents from industry and in urban areas. Today, states report that the condition of two-thirds of surface water and three-fourths of groundwater is good. But where there is still degradation, agriculture is cited as the primary cause. Public health scares have prompted legislators to take action on the runoff of manure, fertilizer, pesticides, and sediment from farmland.

Although it is high time to deal with agriculture’s contribution to water pollution, the damage is very uneven in scope and severity; it tends to occur where farming is extensive and fresh water resources are vulnerable. Thus, blanket regulations would be unwise. There is also enormous inertia to overcome. For decades, the federal approach to controlling agriculture has been to pay farmers not to engage in certain activities, and agricultural interest groups have resisted any reforms that don’t also pay.

Perhaps the most vexing complication is that scientists cannot conclusively say whether specific production practices, such as how manure and fertilizer are spread and how land is terraced and tilled, will help, because the complex relationship between what runs off a given parcel of land and how it affects water quality is not well understood. Prescribing best practices amounts to guesswork in most situations, yet that is what current proposals do. Unless a clear scientific basis can be shown, the political and monetary cost of mandating and enforcing specific practices will be great. Farmers will suffer from flawed policies, and battle lines will be drawn. Meanwhile, the slow scientific progress in unraveling the link between farm practices and water pollution will continue to hamper innovation that could solve problems in cost-effective ways.

Better policies from the U.S. Department of Agriculture (USDA), the Environmental Protection Agency (EPA), and state agricultural and environmental departments are certainly needed. But which policies? Because the science to prove their effectiveness does not exist, mandating the use of certain practices is problematic. Paying farmers for pollution control is a plain subsidy, a tactic used for no other U.S. industry. A smarter, incentive-based approach is needed. Happily, such an approach does exist, and its lessons can be applied to minimizing agriculture’s adverse effects on biodiversity and air pollution as well.

Persistent pollution

Farms and ranches cover about half of the nation’s land base. Recent assessments of agriculture’s effects on the environment by the National Research Council (NRC), USDA, and other organizations indicate that serious environmental problems exist in many regions, although their scope and severity vary widely. Significant improvements have been made during the past decade in controlling soil erosion and restoring certain wildlife populations, but serious problems, most notably water pollution, persist with no prospect of enduring remedies.

The biggest contribution to surface water and groundwater problems is polluted runoff, which stems from soil erosion, the use of pesticides, and the spreading of animal wastes and fertilizers, particularly nitrogen and phosphorus. Annual damages caused by sediment runoff alone are estimated at between $2 billion and $8 billion. Excessive sediment is a deceptively big problem: As it fills river beds, it promotes floods and burdens municipal drinking-water treatment plants. It also clouds rivers, decreasing sunlight, which in turn lowers oxygen levels and chokes off life in the water.

National data on groundwater quality have been scarce because of the difficulty and cost of monitoring. EPA studies in the late 1980s showed that fewer than 1 percent of community water systems and rural wells exceeded EPA’s maximum contaminant levels for pesticides. Fewer than 3 percent of wells topped EPA’s limit for nitrates. However, the percentages still translate into a large number of unsafe drinking water sources, and only a fraction of state groundwater has been tested. The state inventory data on surface water quality are limited too, covering only 17 percent of the country’s rivers and 42 percent of its lakes. A nationally consistent and comprehensive assessment of the nation’s water quality does not exist and is not feasible with the state inventory system. We therefore cannot say anything definitive about agriculture’s overall role in pollution.

Nonetheless, we know a good deal about water conditions in specific localities, enough to improve pollution policy. Important progress is being made by the U.S. Geological Survey (USGS), which began a National Water Quality Assessment (NAWQA) in the 1980s precisely because we could not construct an accurate national picture. USGS scientists estimated in 1994 that 71 percent of U.S. cropland lies in watersheds where at least one agricultural pollutant violates criteria for recreational or ecological health. The Corn Belt is a prime example. Hundreds of thousands of tons of nutrients (nitrogen and phosphorus from fertilizers and animal wastes) are carried by runoff from as far north as Minnesota to Louisiana’s Gulf Coast estuaries. The nutrients cause excessive algae growth, which draws down oxygen levels so low that shellfish and other aquatic organisms die. (This process has helped to create a “dead” zone in the Gulf of Mexico, a several-hundred-square-mile area that is virtually devoid of life.) Investigators have traced 70 percent of the fugitive nutrients that flow into the Gulf to areas above the confluence of the Ohio and Mississippi Rivers. In a separate NAWQA analysis, most nutrients in streams (92 percent of nitrogen and 76 percent of phosphorus) were estimated to flow from nonpoint or diffuse sources, primarily agriculture. USGS scientists also estimated that more than half the phosphorus in rivers in eight Midwestern states, more than half the nitrate in seven states, and more than half the concentrations of atrazine, a common agricultural pesticide, in 16 states all come from sources in other states. Hence those states cannot control the quality of their streams and rivers by acting alone.

Groundwater pollution is another problem. Groundwater supplies half the U.S. population with drinking water and is the sole source for most rural communities. Today, the most serious contamination appears to be high levels of nitrates from fertilizers and animal waste. USGS scientists have found that 12 percent of domestic wells in agricultural areas exceed the maximum contaminant level for nitrate, which is more than twice the rate for wells in nonagricultural areas and six times that for public wells. Also, samples from 48 agricultural areas turned up pesticides in 59 percent of shallow wells. Although most concentrations were substantially below EPA water standards, multiple pesticides were commonly detected. This pesticide soup was even more pronounced in streams. No standards exist for such mixtures.

These results are worrisome enough, and outbreaks of illness such as the Pfiesteria scourge have heightened awareness. But what has really focused national attention on agriculture’s pollution of waterways has been large spills of animal waste from retention ponds. According to a study done by staff for Sen. Tom Harkin (D-Iowa), Iowa, Minnesota, and Missouri had 40 large manure spills in 1996. When a dike around a large lagoon in North Carolina failed, an estimated 25 million gallons of hog manure (about twice the volume of oil spilled by the Exxon Valdez accident) was released into nearby fields and waterways. Virtually all aquatic life was killed along a 17-mile stretch of the New River. North Carolina subsequently approved legislation that requires acceptable animal waste management plans. EPA indicates that as many as two-thirds of confined-animal operations across the nation lack permits governing their pollution discharges. Not surprisingly, a major thrust of the new Clean Water Action Plan is to bring about more uniform compliance for large animal operations.

Dubious tactics

Historically, environmental programs for agriculture have used one of three approaches, all of which have questionable long-term benefits. Since the Great Depression, when poor farming practices and drought led to huge dust storms that blackened midwestern skies, the predominant model for improving agriculture’s effects on the environment has been to encourage farmers to voluntarily change practices. Today, employees of state agencies and extension services and federal conservation agencies visit farmers, explain how certain practices are harming the land or waterways, and suggest new techniques and technologies. The farmers are also told that if they change damaging practices or choose new program X or technology Y, they can get payments from the state or federal government.

Long-term studies indicate that these voluntary payment schemes have been effective in spurring significant change; however, as soon as the payments stop, use of the practices dwindles. The Conservation Reserve Program (CRP) now sets aside about 30 million acres of environmentally vulnerable land. Under CRP, farmers agree to retire eligible lands for 10 years in exchange for annual payments, plus cost sharing to establish land cover such as grasses or trees. About 10 percent of the U.S. cropland base has been protected in this way, at a cost of about $2 billion a year.

Although certain parcels of this land should be retired from intensive cultivation because they are too fragile to be farmed, we may be overdoing it with CRP. Some of this land will be needed to produce more food as U.S. and world demand grows. Much of it could be productively cultivated with new techniques, thereby producing profitable crops, reducing water pollution, and costing taxpayers nothing. One of the most prominent new techniques is no-till farming, done with new machines that cut thin parallel grooves in the soil and simultaneously plant seeds, a method that not only minimizes runoff but also reduces a farmer’s costs. Studies show that no-till farming is usually more profitable than full plowing because of savings in labor, fuel, and machinery.

Evidence suggests that CRP’s gains have been temporary. As with the similar Soil Bank program of the 1960s, once contracts expire, virtually all lands are returned to production. Unless the contracts are renewed indefinitely, most of the 30 million acres will again be farmed, again threatening the environment if farmers fail to adopt no-till practices.

The second approach involves compliance schemes. To receive payments from certain agricultural programs, a farmer must meet certain conservation standards. The 1985 Food Security Act contained the boldest set of compliance initiatives in history. Biggest among them was the Conservation Compliance Provision, which required farmers to leave a minimum amount of crop residues on nearly 150 million acres of highly erodible cropland. In effect, these provisions established codes of good practice for farmers who received public subsidies, and they were a first step toward more direct controls. However, these programs are probably doomed. The general inclination of government and the public to eliminate subsidies led to passage of federal farm legislation in 1996 that includes plans to phase out payment programs by 2002.

The third approach to reducing agriculture’s impact on the environment involves direct regulation of materials such as pesticides that are applied to the land. These programs have been roundly criticized from all quarters. Farm groups complain that pesticide regulation has been too harsh. Environmental groups counter that although the regulations specify the kinds of pesticides that can be sold and the crops they can be used on, they do not restrict the amount of pesticide that can be spread. Even if regulations did specify quantity, enforcement would be virtually impossible. The registration process for pesticide use has also been miserably slow and promises to get slower as a result of the 1996 Food Quality Protection Act, which requires the reregistration of all pesticides against stricter criteria.

In sum, current approaches to limit the environmental effects of agriculture have cost taxpayers large amounts of money with little guarantee of long-term protection. Unless a steady stream of federal funding continues, many of the gains will evaporate. And the idea of paying people not to pollute is becoming increasingly untenable, especially at the state level.

Getting smarter

Four actions are needed to establish a smarter environmental policy for agriculture.

Set specific, measurable environmental objectives. Without quantifiable targets, an environmental program cannot be properly guided. To date, most programs have called for the use of specific farming practices rather than setting ambient quality conditions for surface water and groundwater. This is largely because of political precedent and because of the complex nonpoint nature of many pollution problems. However, setting a specific water quality standard, such as nitrate or pesticide concentration in drinking water, presumes that the science exists to trace contaminants back to specific lands. Such research is currently sparse, although major assessments by the NRC and others indicate that clearer science is possible. Setting standards would help stimulate the science.

Several states are taking the lead in setting standards. Nebraska has set maximum groundwater nitrate concentration levels; if tests show concentrations above the standard, restrictions on fertilizer use can be imposed. Florida has implemented controls on the nutrient outflows from large dairies into Lake Okeechobee, which drains into the Everglades. In Oregon, environmental regulators set total maximum daily loads of pollutants discharged into rivers and streams, and the state Department of Agriculture works with farmers to reduce the discharges. Voluntary measures accompanied by government payments are tried first, but if they are not sufficient, civil fines can be imposed in cases of excessive damages.

The federal government can support the states’ lead by setting minimum standards for particular pollutants that pose environmental health risks, such as nitrates and phosphorus. The Clean Water Action Plan would establish such criteria for farming by 2000 and for confined animal facilities by 2005. Standards for sediment should be set as well.

There is no easy way around the need for a statutory base that defines what gets done, when it gets done, and how it gets done at the farm, county, state, regional, and national levels. Unless those specific responsibilities are assigned, significant progress on environmental problems will not be made.

Create a portfolio of tangible, significant incentives. Without sufficient incentives, we have little hope of meeting environmental objectives. The best designs establish minimum good-neighbor performance, below which financial support will not be provided, and set firm deadlines beyond which significant penalties will be imposed. Incentive programs could include one-stop permitting for all environmental requirements, such as Idaho’s “One Plan” program, which saves farmers time and money; “green payments” for farms that provide environmental benefits beyond minimum performance; a system for trading pollution rights; and local, state, or national tax credits for exemplary stewardship.

It is important to stress that a silver-bullet approach to the use of incentives does not exist. The most cost-effective strategy for any given farm or region will be a unique suite of flexible incentives that fit state and local environmental, economic, and social conditions. Although flexible incentives can require substantial administrative expense, they can also trigger the ingenuity of farmers and ranchers, much as market signals have done for the development of more productive crops and livestock.

Although incentives are preferable, penalties and fines will still be needed. Pollution from large factory farms is now spurring states and the federal government to apply to farms the strict limits typically set for other industrial factories. Some of these farms keep more than half a million animals in small areas. The animals can generate hundreds of millions of gallons of wastes per year–as much raw sewage as a mid-sized city but without the sewage treatment plants. The wastes, which are stored in open “lagoons” or spread on fields as fertilizer, not only produce strong odors but can end up in streams and rivers and possibly contaminate groundwater. In 1997, North Carolina, which now generates more economic benefits from hog farms than it does from tobacco, imposed sweeping new environmental rules on hog farming. Under the Clean Water Action Plan, EPA is proposing to work with the states to impose strict pollution-discharge permits on all large farms by 2005. EPA also wants to dictate the type of pollution-control technologies that factory farms must adopt.

Because pollution problems are mostly local, states must do more than the federal government to create a mix of positive and negative incentives, although the federal government must take the lead on larger-scale problems that cross state boundaries. Both the states and the federal government should first focus on places where a clear agriculture-pollution link can be shown and the potential damages are severe.

Harness the power of markets. Stimulating as much private environmental initiative as possible is prudent, given the public fervor for shrinking government. The 1996 Federal Agriculture Improvement and Reform Act took the first step by dismantling the system of subsidizing particular crops, which had encouraged farmers to overplant those crops and overapply fertilizers and pesticides in many cases. The potential for using market forces is much broader.

One of the latest and most effective mechanisms may be a trading system for pollution rights. A trading system set up under the U.S. Acid Rain Program has been very effective in reducing air pollution, and trading systems are being proposed to meet commitments made in the recently signed Kyoto Protocol to reduce emissions of greenhouse gases.

Trading systems work by setting total pollution targets for a region, then assigning a baseline level of allowable pollution to each source. A company that reduces emissions below its baseline can sell the shortfall to a company that is above its own baseline. The polluter can then apply that allowance to bring itself into compliance. The system rewards companies that reduce emissions in low-cost ways and helps bad polluters buy time to find cost-effective ways to reduce their own emissions.
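To make the mechanics concrete, here is a minimal sketch of the allowance accounting described above; the two sources, their baselines, and the price are hypothetical and not drawn from any actual trading program.

```python
# Minimal illustration of allowance trading under a regional cap.
# All figures are hypothetical.

def trade_allowances(seller, buyer, tons, price_per_ton):
    """Move surplus allowances from a source below its baseline
    to a source above its baseline."""
    surplus = seller["baseline"] - seller["emissions"]
    assert tons <= surplus, "seller cannot sell more than its surplus"
    seller["allowances"] -= tons
    buyer["allowances"] += tons
    seller["revenue"] = seller.get("revenue", 0) + tons * price_per_ton
    buyer["cost"] = buyer.get("cost", 0) + tons * price_per_ton

def in_compliance(source):
    return source["emissions"] <= source["allowances"]

# A source with cheap reductions and one with expensive reductions.
farm_a = {"baseline": 100, "allowances": 100, "emissions": 70}
plant_b = {"baseline": 100, "allowances": 100, "emissions": 120}

trade_allowances(farm_a, plant_b, tons=20, price_per_ton=15)
print(in_compliance(farm_a), in_compliance(plant_b))  # True True
```

The low-cost reducer earns revenue for cutting below its baseline, and the high-cost polluter buys time to find cheaper reductions, which is the incentive structure the paragraph describes.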

A few trading systems are already being tried in agriculture. Farms and industrial companies on North Carolina’s Pamlico Sound are authorized to trade water pollution allowances, but few trades have taken place thus far because of high transaction costs. Experiments are also under way in Wisconsin and Colorado, but the complications of using trading systems for nonpoint pollution will slow implementation.

Pollution taxes can also create incentives for change. Economists have proposed levying taxes that penalize increases in emissions. Some also propose using the proceeds to reward farmers who keep decreasing their emissions below the allowable limit. The tax gives farmers the flexibility to restructure their practices, but political opposition and potentially high administrative costs have hindered development.


One other market mechanism that is cost-effective and nonrestrictive is facilitating consumer purchases of food that is produced by farmers who use minimal amounts of pesticides and synthetic fertilizers. Food industry reports indicate that a growing segment of the public will pay for food and fiber cultivated in environmentally friendly ways. The natural foods market has grown between 15 and 20 percent per year during the past decade, compared with 3 to 4 percent for conventional food products. If this trend continues, natural foods will account for nearly one-quarter of food sales in 10 years. Because organic foods command higher prices, farmers can afford to use practices that reduce pollution, such as crop rotation and biologically based pest controls.
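For readers who want to see the arithmetic behind that projection, a minimal compound-growth sketch follows; the growth rates come from the paragraph above, while the starting sales figures are assumptions chosen only to illustrate how a faster-growing niche approaches a one-quarter share.

```python
# Compound-growth sketch of the natural-foods projection.
# Growth rates are from the text; starting sales are hypothetical.

natural, conventional = 8.0, 92.0   # assumed current sales, arbitrary units

for year in range(10):
    natural *= 1.17        # midpoint of 15 to 20 percent annual growth
    conventional *= 1.035  # midpoint of 3 to 4 percent annual growth

share = natural / (natural + conventional)
print(f"natural-food share after 10 years: {share:.0%}")  # about 23%
```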

Government can play a stronger role in promoting the sale of natural foods. It should make sure that consumers have accurate information by monitoring the claims of growers and retailers and establishing production, processing, and labeling standards. One experiment to watch is in New York, where the Wegman’s supermarket chain is promoting the sale of “IPM” foods grown by farmers who have been certified by the state as users of integrated pest management controls.

Stimulate new research and technology. One of the most overlooked steps needed to establish smarter environmental policy for agriculture is better R&D. Most research to date has focused on remediation of water pollution, rather than forward-looking work that could prevent pollution. Over the years, research for environmental purposes should have increased relative to food production research, but it is not clear that it has.

What is most needed is better science that clarifies the links between agricultural runoff and water quality. As stated earlier, this will be forced as regulations are imposed, but dedicated research by USDA, EPA, and state agricultural and environmental departments should begin right away.

R&D to produce better farm technology is also needed. Despite an imperfect R&D signaling process, some complementary technologies that simultaneously enhance environmental conditions and maintain farm profit have emerged. Examples include no-till farming, mulch-till farming, integrated pest management, soil nutrient testing, rotational grazing (moving livestock to different pastures to reduce the buildup of manure, instead of collecting manure), and organic production. Most of these techniques require advanced farming skills but have a big payoff. No-till and mulch-till farming systems, for example, have transformed crop production in many parts of the nation and now account for nearly 40 percent of planted acres. However, these systems were driven by cost savings from reduced fuel, labor, and machinery requirements and could improve pollution control even further if developed with this goal in mind. Integrated pest management methods generally improve profits while lowering pesticide applications, but they could benefit from more aggressive R&D strategies. A farmer’s use of simple testing procedures for nutrients in the soil before planting has been shown to reduce nitrogen fertilizer applications by about one-third in some areas, saving farmers $4 to $14 per acre, according to one Pennsylvania study.

Other technologies are emerging that have unknown potential, including “precision farming” and genetic engineering of crops to improve yield and resist disease. Precision farming uses yield monitors, computer software, and special planting equipment to apply seeds, fertilizers, and pesticides at variable rates across fields, depending on careful evaluation and mapping techniques. These complementary technologies have developed mostly in response to the economic incentive to reduce input costs or increase yields, and their full potential for environmental management has been neglected. It is time to make pollution prevention and control an explicit objective of agricultural R&D policy.

Accountability and smart reform

The long-standing lack of public and legislative attention to agricultural pollution is changing. Growing scrutiny suggests that blithely continuing down the path of mostly voluntary-payment approaches to pollution management puts agriculture in a vulnerable position. As is happening in Maryland, a single bad incident could trigger sweeping proposals–in that case, possibly an outright ban against the spreading of chicken manure on fields–that would impose serious costs on agriculture. A disaster could cause an even stronger backlash; the strict clean-water regulations of the 1970s came in torrents after the Cuyahoga River in Ohio actually caught fire because it was so thick with industrial waste.

The inertia that pervades agriculture is understandable. For decades farmers have been paid for making changes. But attempts by agricultural interest groups to stall policy reforms, including some important first steps in the Clean Water Action Plan, will hamper farming’s long-term competitiveness, or even backfire. Resistance will invite more direct controls, and slow progress on persistent environmental problems will invite further government intervention.

Under the smarter environmental policy outlined above, farmers, environmental interest groups, government agencies, and the scientific community can create clear objectives and compelling incentives to reduce agricultural pollution. Farmers that deliver environmental benefits beyond their community responsibilities should be rewarded for exemplary performance. Those that fall short should face penalties. We ask no less from other sectors of the economy.

Forum – Summer 1998

Climate change

Robert M. White’s “Kyoto and Beyond” and Rob Coppock’s “Implementing the Kyoto Protocol” (Issues, Spring 1998) are excellent overviews of the issues surrounding the Kyoto Protocol. As chairman of the House Science Committee, I have spent a great deal of time analyzing the Kyoto Protocol, including chairing three full Science Committee hearings this year on the outcome and implications of the Kyoto negotiations. And in December 1997, I led the congressional delegation at the Kyoto conference.

The facts I have reviewed lead me to believe that the Kyoto Protocol is seriously flawed-so flawed, in fact, that it cannot be salvaged. The treaty is based on immature science, costs too much, leaves too many procedural questions unanswered, is grossly unfair because it excludes participation by developing countries, and will do nothing to solve the supposed problem it is intended to solve. Nothing I have heard to date has persuaded me otherwise.

Those who argue that the science of climate change is a settled issue should take notice of the National Academy of Sciences’ National Research Council (NRC) Committee on Global Change’s report, entitled Global Environmental Change: Research Pathways for the Next Decade, issued May 19, 1998. The NRC committee, charged with reviewing the current status of the U.S. Global Change Research Program, stated that the Kyoto agreements “are based on a general understanding of some causes and characteristics of global change; however, there remain many scientific uncertainties about important aspects of climate change.” And Appendix C of the report’s overview document lists more than 200 scientific questions that remain to be adequately addressed.

I want to note one major issue not discussed by White or Coppock-the Kyoto Protocol’s impact on the U.S. armed forces. Because the Department of Defense is the federal government’s largest single user of energy and largest emitter of greenhouse gases, the protocol essentially imposes restrictions on military operations, in spite of Pentagon analyses showing that such restrictions would significantly downgrade the operational readiness of our armed forces. In addition, the protocol would hamper our ability to conduct unilateral operations such as we undertook in Grenada, Libya, and Panama. On May 20, 1998, the House resoundingly rejected these restrictions by a vote of 420 to 0, approving the Gilman-Danner-Spence-Sensenbrenner-Rohrabacher amendment, which prohibits any provision of law, any provision of the Kyoto Protocol, or any regulation issued pursuant to the protocol from restricting the procurement, training, or operation and maintenance of U.S. armed forces.

This unanimous vote of no confidence in the Kyoto Protocol follows last summer’s Senate vote of 95 to 0 urging the administration not to agree to the Protocol if developing countries were exempted-an admonition ignored by the administration. These two “nos” to Kyoto mean the agreement is in serious trouble on Capitol Hill.

REPRESENTATIVE F. JAMES SENSENBRENNER, JR.

Republican of Wisconsin

Chairman, House Science Committee


Like Rob Coppock, I believe that setting a drop-dead date of 2010 for reducing global CO2 emissions by 7 percent below 1990 levels is unrealistic and even economically unsound. Experience has shown, at least in the United States, that citizens become energy-sensitive only when the issue hits them in the pocketbook. (One need only look at the ever-increasing demand for fuel-guzzling sport utility vehicles that has accompanied the historic low in gas prices.) If prices were raised even to their current level in Europe (about $4 per gallon), I think you would have the makings of profound social unrest in the United States.

If dramatic increases in fuel prices through tax increases are politically difficult (if not impossible), then the only alternative available to governments is the power to regulate-to require the use of processes and products that use less energy and emit less CO2. Germany, as well as most other European nations, has traditionally used high energy costs to encourage consumers to reduce demand and increase efficiency. Germany’s energy use per capita is about half that of the United States, but I don’t think that you find any major differences in standard of living between Germans and Americans. This indicates that there are many ways to improve energy efficiency and thus reduce emissions in the United States.

I prefer slow but steady progress toward reduction of carbon emissions, taking into account both the long- and short-term economic implications of taxation and regulation. As Coppock points out, “the gain from rushing to do everything by 2010 is nowhere near worth the economic pain.” Just as one sees the “magic” of compound interest only at the end of a long and steady investment program, we can provide a better global climate future for generations yet unborn through consistent actions taken now.

Kyoto is a very significant step in affecting how future generations will judge our efforts to halt global warming at a tolerable level. Germany stands ready to support the spirit of the Kyoto agreement and to help all nations in achieving meaningful improvements in energy use and efficiency. It has offered to host the secretariat for implementing the Kyoto Protocol, as defined in the particulars to be decided in November 1998 by the Conference of the Parties. One can only hope that these definitions recognize some of the points that Coppock and I have raised, especially the economic implications of massive efforts to meet an arbitrary date.

HEINZ RIESENHUBER

Member, German Bundestag

Bonn, Germany

Former German Minister for Science and Technology


The articles by Robert M. White and Rob Coppock support what industry has been saying for years: Near-term actions to limit greenhouse gas emissions are costly and would divert scarce capital from technological innovation. Building policy around technology’s longer time horizon, rather than the Kyoto Protocol’s 10 to 12 years, means that consumers and businesses could rationally replace existing capital stock with more energy-efficient equipment, vehicles, and processes. Avoided costs free up resources for more productive investments, including energy technologies and alternative energy sources.

White notes that the Kyoto Protocol is “at most . . . a small step in the right direction.” Worse, trying to implement it would mean “carrying out a massive experiment with the U.S. economy with unknown results.” What we do know is that all economic models that don’t include unrealistic assumptions indicate negative results.

Most of White’s “useful actions” are on target: pay attention to concentrations, not emissions; adaptation has been and will remain “the central means of coping with climate change;” disassociating costs and benefits attracts free riders; “population stabilization can have an enormous impact on emissions reduction.” Although he’s right to call for more technological innovation, he may have overstated our grasp of climate science when he says that “only through the development of new and improved energy technologies can reductions in greenhouse gas emissions of the necessary magnitude be achieved without significant economic pain.” His closing paragraph is closer to the mark: “If climate warming indeed poses a serious threat to society . . .” Finally, wind, photovoltaics, and biomass are still not economically competitive except in niche markets, nor can companies yet stake their future on hybrid electric or fuel cell cars. These technologies show great promise, but their costs probably will remain high in the time frame defined by the Kyoto Protocol.

Coppock’s most insightful comment about the protocol is that “no credit is given for actions that would reduce emissions in future periods” and this “creates a disincentive for investments” in new technologies. He also puts CO2 emissions in perspective (1850 levels will double by around 2100), rejects the protocol’s timetable (“the gain from rushing to do everything by 2010 is nowhere near worth the economic pain”), and cautions against assuming that a tradable permits regime would be easy to set up (“the trading is between countries. But countries don’t pollute; companies and households do”) or maintain (“how would pollution from electricity generated in France but consumed in Germany be allocated?”).

However, he seems willing to create a large UN bureaucracy to enforce a bad agreement. Moreover, his model, the International Atomic Energy Agency, is a recipe for massive market intervention: IAEA’s implementation regime “of legally binding rules and agreements, advisory standards, and regulations” includes the all-too-common industry experience of governments turning “today’s nonbinding standards [into] tomorrow’s binding commitments.” The Kyoto Protocol goes IAEA one bureaucratic step better: Any government that ratifies the protocol grants administrators the right to negotiate future and more stringent emission targets.

As Coppock concludes, “the world’s nations may be better off scrapping the Kyoto Protocol and starting over.” To which White adds: “Developments in energy technology show promise, and there has been a gradual awakening to this fact.” Both steps are needed if we are to have a dynamic strategy that reflects a wide range of plausible climate outcomes and also gives policymakers room to adjust as new scientific, economic, and technological knowledge becomes available.

WILLIAM F. O’KEEFE

President

American Petroleum Institute

Washington, D.C.


Since the emergence of the Berlin Mandate, the AFL-CIO has been on public record in opposition to the direction in which international climate negotiations have been headed. Upon the conclusion of the Kyoto round, we denounced the treaty but made it clear that, regardless of a flawed treaty, we want to be a part of solving this real and complicated global problem. To that end, we are working with allies who want not only to examine real solutions to climate change but also to address the economic consequences those solutions present for U.S. workers and their communities.

The articles by Rob Coppock and Robert M. White mirror our concerns about the correct approach regarding action on the global climate change issue. We have taken a straightforward position: A global concentration target must be identified so that the entire global community can join in taking specific actions that, in sum, will result in a stable and sustainable outcome; domestic economic considerations must be as important in the overall effort as are environmental ones; and time frames and plans should guarantee a transition that is smooth but mandates that action begin now.

We are certain that technology is part of the answer for reducing our domestic emissions and improving efficiency as well as for avoiding in the developing world the same “dirty” industrial revolution we’ve experienced. We understand that there are finite resources available for this pursuit and that we’d better spend them wisely, in some sensible strategic manner, from the start.

We have only one chance to properly invest our time, energy, and money. A serious commitment to include regular participation by workers will serve this process well. We can become more energy-efficient plant by plant, institution by institution, and workplace by workplace in this country through worker participation. It would be irrational to pursue solutions that did not start first with the “low-hanging fruit” that is available at every U.S. workplace, perhaps without the need for much investment or expense.

We agree that we need a clear strategy more than a quick fix. We need to honor natural business cycles and long-term investment decisions. We should not spend excessively to meet arbitrary deadlines but rationally to meet national strategic objectives. This is a political problem as much as it is an environmental one. We add our voices to the voices of those who will pursue reasonable strategic solutions that include everybody and who will move this process with some urgency.

DAVID SMITH

Director of Public Policy

AFL-CIO

Washington, D.C.


I know Rob Coppock personally and greatly respect his perspectives on science and policy. I am sympathetic to the logic of the arguments he presents in “Implementing the Kyoto Protocol” in terms of taking a more measured and gradual approach to mitigating greenhouse gas emissions, and I agree that careful, well-considered strategies are more likely to produce better long-term results at less cost.

For the sake of further thought and discussion, however, I would like to raise a philosophical point or two, on which I invite Rob and others to comment. Major pieces of environmental legislation passed in the United States in the early 1970s contained ambitious (perhaps even heroic) targets and timetables for pollution abatement that strike me as being very similar in nature to the provisions of the Kyoto Protocol. You will remember that we were to eliminate the smog plaguing our major cities, make our rivers fishable and swimmable, and so on, all in short order (generally by the mid-1980s, as I recall). Was an awful lot of money spent? Yes. Was money wasted? Most certainly. Were the targets and timetables met? Hardly ever. Were the laws flawed? Yes. (Witness the continuing amendments.) Was it the right thing to do at the time? This is the critical question, and despite all the criticisms of these laws raised over the past quarter century and more, I would still answer, “Yes. Most definitely.”

What those early expressions of public policy such as the Clean Air Act of 1970 and the Water Pollution Control Act of 1972 did was not just reduce pollution (which they indeed in some measure accomplished); they also changed the trajectory of where we were headed as a society, both physically, in terms of discharges and emissions, and mentally, in terms of our attitudes toward the levels of impact on our environment we were willing to accept.

I am much concerned about this same issue of trajectory when it comes to global warming. Greenhouse gas emissions continue their inexorable increase, and every study I read projects growth in energy demand and fossil fuel use in industrialized nations, as well as explosive growth in the developing world. I am concerned that this tide will swamp plans based on otherwise worthy concepts such as “waiting to install new equipment until old equipment has come to the end of its useful life.” (I heard similar arguments made concerning acid rain and other environmental issues, yet the genius of our engineers managed to bless a lot of this old equipment with almost eternal life.)

Sometimes good policy is more than carefully orchestrated and economically optimized plans and strategies. Sometimes there has to be a sense of vision, a “call to arms,” and maybe even seemingly impractical targets and timetables. If climate change is real, now may be one of those times.

MARTIN A. SMITH

Chief Environmental Scientist

Niagara Mohawk Power Corporation

Syracuse, New York


Rob Coppock’s thoughtful article faults the Kyoto Protocol for its emphasis on near-term targets to the exclusion of more fundamental changes that could enable us to ultimately stabilize global concentrations of greenhouse gases. Without some remarkable breakthroughs at this November’s Buenos Aires Conference of the Parties, Coppock envisions that the Kyoto Protocol will prove very costly to implement; will, even if implemented, do relatively little to slow the steady rise in global concentrations of greenhouse gases; and will be unlikely to be ratified by the U.S. Senate.

A more fundamental shortcoming of the Kyoto Conference may have been the failure to create a level playing field for emerging green energy technologies and to provide near-term market incentives to producers of transformational energy systems. Industrialized countries left Kyoto without committing to phase out multibillion-dollar yearly subsidies to domestic fossil fuel industries or to shift the roughly $7 billion of annual direct government investment in energy R&D in OECD countries to provide more than a pittance for renewable energy sources or efficiency. Even in the face of evidence that an energy revolution may be under way as profound as that which between 1890 and 1910 established the current system of grid-based fossil fuel electricity and gasoline-fueled cars, no effort was made to aggregate OECD markets for green energy or, aside from an ill-defined Clean Development Mechanism, to provide inducements for such applications in developing countries. The Clinton administration’s promising proposal to provide about $6.3 billion over five years in tax and spending incentives to promote greenhouse-benign technologies in the United States has foundered in Congress on the grounds that this would be backdoor implementation of a not-yet-ratified protocol.

Even if universally ratified by industrialized countries and fully implemented, the Kyoto Protocol will make only a small dent in the continuing rise in global greenhouse concentrations that is driving climate change. Stabilization of greenhouse concentrations would require about a 60 percent global reduction in CO2 emissions below 1990 levels; even if the Kyoto Protocol is fully implemented, global CO2 emissions are likely to rise, according to a U.S. Energy Information Administration analysis, to 32 percent above 1990 levels by 2010. The radical reductions required for climate stabilization will require a very different model than that established in Kyoto.

Climate stabilization will be achieved not through global environmental command and control but by emulating the investment strategies of the information and telecommunications revolutions. Some characteristics of emerging green technologies, especially photovoltaics, fuel cells, wind turbines, and micropower plants, could mirror the information technology model of rapid innovation, mass production, descending prices with rising volume, and increased market demand. Major firms such as Enron, BP, and Shell, and governments such as those of Denmark and Costa Rica, have begun to glimpse these possibilities. The challenge of future climate negotiations is to develop policies to reinforce this nascent green energy revolution, which may ultimately deliver clean energy at prices lower than those of most fossil fuels.

JOHN C. TOPPING, JR.

President

Climate Institute

Washington, D.C.


Rob Coppock puts his finger on the critical point: Concentrations of greenhouse gases are closely coupled to climate change; emissions are not. It is cumulative emissions over decades that will shape the future concentration of CO2, the principal greenhouse gas.

Finding a way to make sure that global emissions peak and then begin to decline will be a great challenge. Perhaps more daunting still will be the challenge of eventually reducing emissions enough to achieve atmospheric stabilization. Stabilization of CO2 at 550 parts per million by volume (ppmv) means reducing per capita emissions at the end of the 21st century to approximately half a metric ton of carbon per person per year. The trick is to do this while maintaining and improving the standard of living of the developed nations and raising the standard of living of the developing nations. Per capita emissions in the United States are approximately five metric tons of carbon per person per year, and only some of the developing nations are presently able to claim to have emissions at or below one-half metric ton per person per year. Even if the world eventually defines 750 ppmv to be an acceptable concentration, global per capita emissions must be only one metric ton per person per year.
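A quick back-of-the-envelope check of this per capita arithmetic, using the end-of-century population of roughly 10 billion that Edmonds cites in the next paragraph:

```latex
% 550-ppmv case: what 0.5 t C per person per year implies globally
\[
  10 \times 10^{9}\ \text{people}
  \times 0.5\ \frac{\text{t C}}{\text{person}\cdot\text{yr}}
  \;=\; 5\ \text{Gt C per year of total emissions,}
\]
\[
  \text{while the United States today emits about }
  5\ \frac{\text{t C}}{\text{person}\cdot\text{yr}},
  \text{ implying roughly a tenfold per capita cut for Americans.}
\]
```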

Ultimately, something beyond Kyoto is needed: a strategy to preserve a specific concentration ceiling. The strategy needs two pieces-a policy that will clearly indicate that in the future emissions will peak and decline, and a strategy for delivering technologies that will lower net carbon emissions to the atmosphere. Delivering technologies that will enable humans to provide the energy services needed to give economic prosperity to the entirety of Earth’s population of 10 billion or so, while releasing less carbon per capita than at present, will require more than just the best available technologies of today. It will require a commitment to R&D, including the development of technologies to enable the continued growth of fossil fuel use in a carbon-constrained world.

Defining and building a research portfolio whose size and composition will deliver the next generation of energy technologies and lay down the foundations for future technologies is a critical task for the years ahead. It cannot be undertaken by a single agency, firm, or institution acting alone. It requires an international public-private partnership committed for the long term. Several international efforts are beginning to take shape: the Climate Technology Initiative, announced in Kyoto by the United States and Japan; the IEA Greenhouse Gas R&D Programme; and the Global Energy Technology Strategy Project to Address Climate Change. They offer real hope that our grandchildren will inherit a prosperous world with limited atmospheric CO2.

JAE EDMONDS

Senior Staff Scientist

Battelle Memorial Institute

Washington, D.C.


Robert M. White and Rob Coppock overstate the difficulty of meeting the Kyoto emission targets. Climate protection is not costly but profitable, because saving fuel costs less than buying fuel. No matter how the climate science turns out or who goes first, climate protection creates not price, pain, and penury, but profits, innovation, and economic opportunity. The challenge is not technologies but market failures.

Even existing technologies can surpass the Kyoto CO2 targets at a profit. For example, contrary to Coppock’s bizarre concept for retrofitting commercial buildings, conventional retrofits coordinated with routine 20-year renovation of large office towers can reduce their energy use by about 75 percent, with greatly improved comfort and productivity. Just retrofitting motor and lighting systems can cheaply cut U.S. electricity use in half.

On the supply side, today’s best co- and trigeneration alone could reduce U.S. CO2 emissions by 23 percent, not counting switching to natural gas or using renewables. All these strategies are widely profitable and rapidly deployable today. In contrast, nuclear fission and fusion would worsen climate change by diverting investment from cheaper options, notably efficient end use.

Just saving energy as quickly as the United States did during 1979-86, when gross domestic product rose 19 percent while energy use fell 6 percent, could by itself achieve the Kyoto goals. But this needn’t require repeating that era’s high energy prices; advanced energy efficiency is earning many firms annual returns of 100 to 200 percent, even at today’s low and falling prices. Rapid savings depend less on price than on ability to respond to it: Seattle’s electric rates are half those of Chicago, yet it is saving electrical loads 12 times faster than Chicago because its utility helps customers find and buy efficiency.

The Kyoto debates about carbon reduction targets are like Congress’s fierce 1990 debates about sulfur reduction targets. What mattered was the trading mechanism used to reward sulfur reductions-the bigger and sooner, the more profitable. Now sulfur reductions are two-fifths ahead of schedule, at about 5 to 10 percent of initial cost projections. Electric rates, feared to soar, fell by one-eighth. The Kyoto Protocol and U.S. climate policy rely on similar best-buys-first emissions trading. But trading carbon will work even better than trading sulfur: It will rely mainly on end-use energy efficiency (which could not be used to bid in sulfur trading), and saving carbon is more profitable than saving sulfur.

Kyoto’s strategic message to business-carbon reductions can make you rich-is already changing business behavior and hence climate politics. Leading chemical, car, semiconductor, and other firms are already behaving as if the treaty were ratified, because they can’t afford to lose the competitive advantage that advanced energy productivity offers. The profit-driven race to an energy-efficient, commercially advantageous, climate-protecting, and sustainable economy is already under way.

AMORY B. LOVINS

Director of Research

Rocky Mountain Institute

Snowmass, Colorado


In my view, not one of the articles on global warming in the Spring 1998 Issues puts this potentially disastrous global problem in meaningful perspective. Robert M. White comes closest with his point that “Only through the development of new and improved energy technologies can reductions in greenhouse gas emissions of the necessary magnitude be achieved.” However, none of the technologies he lists, with one exception, can provide a major solution to the problem.

In the next half century, world energy needs will increase because of increases in world population and living standards. If the projected 9.5 billion world population in 2050 uses an average of only one-third of the per capita energy use in the United States today, world energy needs will triple. The only available energy source that can come close to providing the extra energy required without increasing greenhouse gas emissions is nuclear power. Solar power sounds wonderful, but it would take 50 to 100 square miles of land to produce the same power as one large nuclear or coal plant built on a couple of acres. A similar situation exists with wind power. Fusion too could be wonderful, but who can predict when or if it will become practical?
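As a rough check on that tripling figure, here is the arithmetic under one set of assumed late-1990s baseline values (roughly 350 gigajoules of primary energy per capita in the United States and roughly 400 exajoules of world primary energy use per year); these baseline numbers are assumptions added for illustration, not figures from the letter.

```latex
% Assumed baselines: ~350 GJ per capita (U.S.), ~400 EJ/yr (world total)
\[
  9.5 \times 10^{9}\ \text{people}
  \times \tfrac{1}{3} \times 350\ \frac{\text{GJ}}{\text{person}\cdot\text{yr}}
  \approx 1{,}100\ \text{EJ/yr}
  \approx 2.8 \times 400\ \text{EJ/yr},
\]
```

which is close to the tripling of current world energy use that Wolfe projects.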

In view of the coming world energy crunch, we should be working on all of these technologies, and hopefully major advances can be made. But is it responsible to let the public think we can count on unproven technologies? And is it responsible to imply that we can solve the problem by emissions trading or other political approaches, as suggested by Rob Coppock and Byron Swift?

I respect the qualifications of the three authors I refer to above, and I don’t necessarily disagree with the points they make. But in terms of educating and providing perspective to readers of Issues who are not expert in energy issues, the articles do a disservice. In principle, nuclear energy could, over the next 50 years, provide the added world energy needed without greenhouse gas emissions. But in this country nuclear energy is going downhill because the public doesn’t understand its need and value. This situation results from the antinuclear forces, the so-called environmentalists, who have misled the public and our administration. But aren’t we technologists also to blame for not informing the public about the overall problem and its one effective solution?

If we continue on our present course and the greenhouse effect is real, our children and grandchildren who will suffer can look back and blame the anti-nukes. But will they, should they not, also blame us?

BERTRAM WOLFE

Monte Sereno, California

Former vice president of General Electric and head of its nuclear energy division


Patent reform pending

The title of Skip Kaltenheuser’s article “Patent Nonsense” aptly describes its contents (Issues, Spring 1998). Kaltenheuser alleges that a provision to create “prior user rights” would undermine the patent system. Let’s begin by explaining that the concept refers to a defense against a charge of patent infringement. This defense is available only to persons who can prove they made the invention at least one year before the patentee filed a patent application and who also actually used or sold the invention in the United States before the patentee’s filing date.

I should point out that the notion of a prior use defense is not unprecedented. The 1839 Patent Act provided that “every person . . . who has . . . constructed any newly invented . . . composition of matter, prior to the application by the inventor . . . shall be held to possess the right to use . . . the specific . . . composition of matter . . . without liability therefore to the inventor.” Moreover, like H.R. 400 and S. 507, prior use under the 1839 act did not invalidate the patent. Even today there is a form of prior use defense for a prior inventor who has not abandoned, suppressed, or concealed a patented invention, or where the patented invention is in public use or on sale more than one year before the U.S. filing date of the patent.

Fundamentally, a prior use defense is needed because no company, large or small, can afford to patent every invention it makes and then police the patents on a global basis. Where inventions are kept as trade secrets to be used in U.S. manufacturing plants, these inventions are job creators for U.S. workers. However, a risk exists that a later inventor may obtain and enforce a patent that can disrupt the manufacturing process. In almost one-half of such cases, the later inventor will be a foreign patent holder. Kaltenheuser draws on a former patent commissioner’s suggestion of how to avoid this risk: Simply publish all your unpatented manufacturing technology so no one can patent it. I don’t know about you, but publishing this country’s great storehouse of technological know-how without any kind of protection, provisional or otherwise, so that its foreign competitors can use it to compete against U.S. workers in U.S. manufacturing plants doesn’t strike me as a terribly good idea.

That brings me to another point-the prior use defense is a perfectly legal “protectionist” exception to a patent. All of our trading partners use prior user rights for just this purpose. Nearly half of all U.S. patents are owned by foreigners. The prior use defense will mean that these later-filed foreign-owned patents cannot be used to disrupt U.S. manufacturing operations and take U.S. jobs. To qualify for the defense, the prior commercial use must be in the United States; a use in Japan or Europe will not qualify.

Finally, it should be noted that the prior use defense is just that-a defense in a lawsuit. The person claiming to be a prior user must prove this in a court of law, and anyone who alleges such a defense without a reasonable basis will be required to pay the attorney’s fees of the patentee.

There are more safeguards in H.R. 400 and S. 507 than space permits me to cover in this letter, but suffice it to say that there have been 16 days of congressional hearings on all aspects of these bills and two days of debate on the House floor. The legislation is supported by five of the six living patent commissioners and by a vast number of large and small high-tech U.S. companies who rely on the patent system, create jobs in the United States, and contribute to our expanding economy. H.R. 400 and S. 507 will strengthen the patent system and allow us to continue our prosperity in the 21st century.

REPRESENTATIVE HOWARD COBLE

Republican of North Carolina


“Patent policy isn’t a topic that lends itself to the usual sausage-making of Congress.” Skip Kaltenheuser’s concluding statement, coupled with material headlined, “The bill’s bulk obfuscates,” captures the essence of recent convoluted attempts to legislate changes in the U.S. patent system.

The U.S. patent system was designed to enable inventors to disclose their secrets in return for the exclusive right to market their innovation for a period of time. There are many in government, industry, and academia who fail to appreciate this. They do not understand that disclosure helps the economy by putting new ideas in the hands of people who, for a fee to the patent holder, find novel and commercially applicable uses for these ideas. Meanwhile, exclusive use of the innovation by the inventor provides a huge incentive for inventors to keep inventing.

Legislation to change the patent system has been pushed in the past three Congresses by certain big business interests, domestic and foreign. Currently, the Senate is considering S. 507, the Omnibus Patent Act of 1997. This measure is intended to harmonize our patent standards with those of foreign systems. I am opposed to this bill, and its House of Representatives companion bill H.R. 400, because they contain several elements that will damage the innovative process and sacrifice our nation’s status as the global leader in technology-driven commerce. Many Nobel laureates in science and economics agree with me.

I appreciate Kaltenheuser’s perspective that the heart of this proposed legislation is a provision to create prior user rights, which encourage corporations to avoid the patent process altogether. To me, it makes sense that under current law, companies that rely on unpatented trade secrets run the risk that someone else will patent their invention and charge them royalties. What doesn’t make sense is that the Senate and House could consider, much less pass, legislation that would permit companies whose trade secrets are later patented by someone else to continue to market their products without paying royalties. Encouraging companies to hide secrets is the opposite of what is needed in an economy that relies on information.

As Kaltenheuser states, “The more closely one looks at the bill, the more its main thrust appears to be an effort by companies at the top to pull the intellectual property ladder up after them.” I’m certain that the void created will destroy the small inventor, substantially harm small business, and reduce U.S. technological innovation.

We must do all we can to preserve the rights and incentives of individuals, guaranteeing that they have ownership of, and the ability to profit from, their endeavors as the Constitution mandates. We must not rush to drastically alter our tested patent system in ways that would produce unsought, unforeseen, and unwelcome consequences.

REPRESENTATIVE DANA ROHRABACHER

Republican of California


As Skip Kaltenheuser points out in his excellent article, the bill S.507, now in the Senate, will discourage the flow of new inventions that are essential to our country’s advancement of science and technology. The bill’s proponents make arguments for this legislation and its prior user rights provision but cite no real-life cases showing that such a dramatic change in our patent system is necessary.

Considering the damage such legislation would do, as evidenced by the opposition of two dozen Nobel laureates, the Association of American Universities, the Association of University Technology Managers, and the Intellectual Property Law Institute, it is incumbent on those proponents to show an overwhelming need for this legislation. They have not done so. If a company doesn’t want to file a patent application on every minor invention, in order to protect itself against patent applications later filed by others, all it needs to do is publish that invention anywhere in the world in any language.

Under S.507, even after a patent has issued, a large company could, after initiating a reexamination procedure in the Patent Office, appeal a decision made by the examiner to the Board of Appeals in favor of the patent. If the Board of Appeals also decides in favor of the patent, the large company could then appeal to the Court of Appeals for the Federal Circuit. All this extra and unnecessary legal work required of the patentee would cost him or her hundreds of thousands of dollars, so that many laudable inventions would be abandoned or given away to the deep-pocketed adversary. The U.S. patent system should not operate solely in favor of the multinationals, forcing universities, individual inventors, and start-up companies out of the patent system.

DONALD W. BANNER

Washington, D.C.

Former patent commissioner


The patent bill that the U.S. Senate is considering, S.507, modernizes America’s two-century-old patent law to bring it into the information age. The bill is strongly supported by U.S. industry, venture capitalists, educators, and the Patent Office. Opposing modernization are many attorneys who, frankly, benefit from the status quo. The opponents rally support against modernization by characterizing it as a sellout to industry and by making the claim that the laws that built America should not be changed.

The vast majority of America’s inventive genius is not patented. It is kept as trade secrets, and for good reason. A U.S. patent protects an invention only in America. When a U.S. patent is granted and published, that invention can be freely and legally copied anywhere else in the world. In most cases, trade secrets are the only effective way to protect internal manufacturing processes from being copied, and those processes are absolutely critical to maintaining our competitive position in a global economy. Indeed, protecting our trade secrets is why we worry about industrial espionage and why we don’t let competitors, especially foreign competitors, see everything they would like to see in our factories. If we were unable to keep trade secrets, we would be making a free gift of U.S. technology to the rest of the world.

In spite of this obvious truth, America’s right to have trade secrets is under powerful attack by some attorneys. Skip Kaltenheuser advances the “patent it or lose it” theory, which says that because they failed to get patents, the owners of trade secrets should be vulnerable to losing their businesses. This heavily promoted theory is based on the rather far-fetched premise that the primary purpose of patent law is to force inventors and innovators to publish the details of their technology. Under this theory, anyone who invents and fails to publish or patent the invention should lose it. To make this theory work, it is necessary to discard our cherished notion that patents should go only to first inventors. The “patent it or lose it” theory uses patents as an enforcement tool – a kind of prize awarded to people who “expose” the trade secrets of others. It would permit people who are clearly not first inventors to openly and legally patent the trade secrets of others. The new patent owner would then have the right to demand royalties or even shut down the “exposed” trade secret owner. Under this theory, the trade secrets used to make the local microbrew or even Coca Cola could legally be patented by someone else. And, of course, so could the millions of inventions and innovations that are routinely used in U.S. factories.

Existing U.S. patent law contains wording that can be interpreted (I would say misinterpreted) to give support to the “patent it or lose it” argument advanced by Kaltenheuser. The problem lies in section 102(g), which says that an inventor is entitled to a patent unless “before the applicant’s invention thereof, the invention was made in this country by another who had not abandoned, suppressed or concealed it.”

That word “concealed” is the culprit. The intention behind the wording is laudable. It is designed to ensure that inventions are used to benefit the public and that someone’s inventive work which was long buried and forgotten cannot be brought up later to invalidate the patent of another inventor who commercializes the invention and is benefiting the public. What some attorneys are now claiming, however, is that “concealed” should apply to any unpublished invention, without regard to whether or not it is being used to benefit the public. In other words, any inventive trade secret is fair game to be patented by someone who can figure it out.

The best solution to this problem is to do what most other countries have done. They protect their inventors, entrepreneurs, and trade secrets with what they call prior user rights laws. In principle, a prior user rights law provides the same kind of grandfather protection that exists in many U.S. laws. It lets businesses keep doing what they were doing even though someone comes along later and somehow manages to get a patent on their trade secret.

Title IV, the “prior domestic commercial use” section of S.507, is a very carefully worded and restricted form of prior user rights. It provides an elegant win-win solution when a patent is granted on someone else’s commercial trade secret. The bill says that if the trade secret user can prove that he was using his technology to benefit the public with goods in commerce and that he was doing these things before the patentee filed his patent, then he may continue his use. The bill contains other restrictions, including the requirement that the trade secret owner be able to prove that he was practicing the technology at least a year before the patentee filed his patent. This simple solution will make many of today’s bitter legal battles over patents unnecessary. Because a prior user need only meet the required proofs, it will no longer be necessary to attack and defend the patent’s validity. The obvious benefit to small business is such that most of the major small business organizations have come out in support of S.507.

S.507 will help stem the astronomical growth in legal fees being paid by U.S. manufacturers to protect their intellectual property. And, pleasing the constitutional scholars, S.507 will restore patents to their intended purpose-incentivizing technology and progress, not taking the technology of others.

BILL BUDINGER

Chairman and CEO

Rodel, Inc.

Newark, Delaware


“The bill [S. 507] was designed not for reasoned debate of its multiple features but for obfuscation,” charges Skip Kaltenheuser. Yet the record behind the comprehensive patent reform bill extends back to the 1989 report from the National Research Council and the 1992 report from the Advisory Commission on Patent Law Reform (itself based on 400 public comments), not to mention 80 hours of hearings over three Congresses. To the contrary, this bill is a model of transparency. Does Kaltenheuser not know this history or does he deliberately disregard it? And how can he characterize as “nonsense” a bill supported by every living patent commissioner save one?

Repudiating all the errors and misleading statements in Kaltenheuser’s article would take more space than the original. A partial list of whoppers:

  1. The concluding sentence, “Let’s take time to consider each of the proposed changes separately and deliberately,” carries the false implication that this has not already been done, when demonstrably it has. Each major component was originally introduced as a separate free-standing bill. Kaltenheuser seems to be unaware that the Senate previously passed a prior user rights bill (S. 2272, in 1994). Where was he then?
  2. “There is also a constitutional question. Most legal scholars . . . interpret the . . . provision on patents as intending that the property right be ‘exclusive.'” First, the proposed prior user right (S. 507, Title IV) would create only a fact-specific defense that could be asserted by a trade secret-holding defendant, who would have to meet the burden of proof in establishing that he or she was the first inventor, before the patent holder. This fact-specific defense would no more detract from exclusivity than does the more familiar fact-specific, limited defense of fair use in copyright.

All the supposed consequences of a (nonexistent) general derogation to the patent right therefore simply cannot occur. Moreover, every other major nation already has enacted such a defense, although you would never know that from reading Kaltenheuser. Nor would you learn that the extant right is rarely invoked in litigation-in France and Germany, seven cases each over two decades; in England and Italy, no recorded cases. What the provision does is to replace high-stakes litigation where the only certainty is a harsh result-a death penalty for the trade secret, or occasionally for the patent-with a grandfather clause that leads to licensing as appropriate.

Second, Congress emphatically does have the power to create a general limitation on rights (nonexclusivity) if it so chooses. The Constitution grants a power to Congress, not a right to individuals (a point often misconstrued), and the greater power to create exclusive rights logically implies the lesser power to create nonexclusive rights that reach less far into the economy, as preeminent copyright scholar Melville Nimmer always made clear. Congress first created such nonexclusive copyright rights under the same constitutional power in 1909, and the U.S. Copyright Act today has more such limitations than any other law in the world. Claims of unconstitutionality are frivolous.

The first consideration-that a prior user right is a specific, not general, limitation of rights in the first place, carrying no loss of exclusivity-of course totally disposes of the constitutional objection. Yet the larger bogus claim needs to be demolished, as the charge of unconstitutionality carries emotional freight and will be accepted by the unsuspecting.

  3. “Entities that suppress, conceal or abandon a scientific advance are not entitled to patent or other intellectual property rights. It is the sharing of a trade secret that earns a property right.” Did Kaltenheuser read the Restatement of Torts, the Model Trade Secrets Act, or the Economic Espionage Act? The well-established general rule is that trade secret protection flows to anything that confers a competitive advantage and is not disclosed; and when the proprietor decides to practice the technology (if it is that) internally, no loss of rights applies. Trade secrets are not suspicious; to the contrary, Congress legislated federal protection in 1996, in the face of widespread espionage by foreign governments.

Companies often face difficult decisions about which form of protection to choose: patent or trade secret. According to the late Edwin Mansfield, companies choose patents 66 to 84 percent of the time. When they pass up trying for a patent, they do so for one of two basic reasons: first, because infringement of inside-the-factory process technologies, such as heat treatment, would be undetectable, making a patent unenforceable; and second, to avoid outrageous foreign patent fees that are designed to make a profit off foreign business.

Faced with the same fees, the bill’s opponents often take a self-contradictory posture, giving up on filing abroad (the only way to obtain protection outside the United States) and then bemoaning their lack of protection. The bill’s supporters are working hard to reduce these outlandish fees, making it more feasible for all U.S. inventors to file for patents abroad.

DAVID PEYTON

Director, Technology Policy

National Association of Manufacturers

Washington, D.C.


Skip Kaltenheuser attacks the proposal to modernize our patent law. By contrast, virtually all of U.S. industry, almost all former patent commissioners, and many successful U.S. inventors support the Omnibus Patent Act of 1997, S. 507, because it will provide increased intellectual property protection for all inventors and for those who put technology to use, whether or not it is the subject of a patent (most U.S. innovators and businesses do not have patents). Our patent law, written two centuries ago, today puts U.S. inventors and industry at a global disadvantage. The modernization bill addresses those problems and also modernizes the patent office so that it can keep pace with the rapid development of new technology and the resulting flow of patent applications.

Foreign entities now obtain almost half of U.S. patents, and they have the right to stop U.S. innovators from using any of the technology covered by those patents. Patents, no matter how obtained or however badly or broadly written, carry the legal presumption of validity, and challenging them in court can cost millions in legal fees. The modernization bill provides an inexpensive and expert forum (the patent office itself) for adjudicating questions about the validity of inappropriately obtained patents.

Kaltenheuser offers emotional quotes from people he claims oppose the bill. He cites an open letter signed by Nobel laureates. One of the signatories to that letter, Stanford University physics professor Douglas Osheroff, wrote the Senate Judiciary Committee to say, “my name was placed on that letter contrary to my wishes, and it is my expectation that it [S. 507] will indeed improve upon existing patent regulations.” Similarly, Nobel laureate Paul Berg asked that the opponents of S. 507 stop using his name because he supports the bill: “Indeed, I believe [the Omnibus Patent] bill offers improvements to the procedures for obtaining and protecting American inventions.” And in spite of Kaltenheuser’s claim that the patent bill will dry up venture capital, the National Venture Capital Association supports the bill.

Successful manufacturing depends on confidential proprietary technology: trade secrets. Kaltenheuser’s proposal to eliminate from the bill the prior user defense against patent infringement would continue to punish companies (and individuals) who invest scarce resources to develop technologies independently, do not publish or patent them, and put them to use before others have filed a patent application on these same technologies. Such disincentives to investment and risk-taking are clearly counter to sound economic policy. Eliminating the defense would allow the ultimate patent recipient to force those innovators and companies to pay royalties on their own independently developed technologies, or even to stop using them altogether; a prior user defense would prevent this. Also, the impact on the patent holder is minimal, since, apart from the entity that successfully asserts the prior user defense, the patent holder can still collect royalties from any other users of the technology.

Kaltenheuser quotes former Patent Commissioner Donald Banner as saying that the only thing companies have to do is publish all their technology, and then it can’t be patented. But why should manufacturers have to publish their trade secrets so their competitors can use them, or be required to get patents just to establish the right to keep using their own innovations?

If we do not reform our patent system and U.S. companies have to publish or patent everything they do, our leading-edge technology and manufacturing will be driven offshore. We need the United States to be a safe place for creating intellectual property and putting it to work. Most foreign countries protect their native technology and industries by allowing trade secrets and prior user rights. We should do the same.

TIMOTHY B. HACKMAN

Director of Public Affairs, Technology

IBM Corporation

Chair

21st Century Patent Coalition

Washington, D.C.


Utility innovation

Like Richard Munson and Tina Kaarsberg (“Unleashing Innovation in Electricity Generation,” Issues, Spring 1998), I too believe that a great deal of innovation can be unleashed by restructuring the electric power industry. Some transformation is likely to occur simply because electricity generators will, for the first time in nearly a century, begin to compete for customers. Much more change can be stimulated, however, if we draft national energy-restructuring legislation that fosters rather than stifles innovation.

In drafting my electricity-restructuring legislation (S. 687, The Electric System Public Benefits Act of 1997), I was careful to construct provisions that accomplish the goal of reducing emissions while also stimulating innovation. One provision requires that a retail company disclose the generation type, emissions data, and price of its product so that consumers can make intelligent decisions about their electric service providers. With verifiable information available, many customers will choose to buy clean power. In fact, firms that are currently marketing green energy in California’s competitive market are banking on the fact that people will opt for green power. This consumer demand is likely to increase production of new supplies of renewable energy, a sustainable, clean product.

Another provision would establish a national public benefits fund, whose revenues would be collected through a non-bypassable, competitively neutral wire charge on all electricity placed on the grid for sale. Money from this fund would be available to states for R&D and to stimulate innovation in the areas of energy efficiency, demand-side management, and renewable energy.

Yet another provision establishes a cap and trading program for nitrogen oxide emissions. This provision would put in place a single, competitively neutral, nationwide emission standard for all generators that use combustion devices to produce electricity. Currently, older generation facilities do not face the same tough environmental standards as new generation facilities, and the nitrogen oxide emission rates of utilities vary by as much as 300 percent. The older facilities continue to operate largely uncontrolled and thus maintain a cost advantage over their cleaner competitors. With this provision, the older firms would be forced to upgrade to cleaner generation processes or shut down.

The three provisions I outline above are just a selection of the innovation-stimulating measures in my bill, and the measures in my bill are just a selection of those proposed by Munson and Kaarsberg in their insightful article. Congress should carefully consider including many of these proposals when it passes national energy-restructuring legislation.

SENATOR JAMES M. JEFFORDS

Republican of Vermont


Richard Munson and Tina Kaarsberg do a fine job of describing the technological advances in store for us upon electricity deregulation. I think we share the view that no issue is more important than deregulating the electric industry in such a way that technological advances, whether in distributed generation and microgeneration, silicon control of power flows on the grid, or efficiency in fuel burning and heat recapture, are given the best possible chance.

But for that very reason, I’m reluctant to praise the drive for mandatory access as the fount of new technology. We do not need programs like the Public Utility Regulatory Policies Act and mandatory access to force what amounts to involuntary competition across a seized grid. Instead, government must strive to remove legal impediments to voluntary competition and allow markets to deliver competition on their own, rather than instituting an overarching federally regulated structure to manage transmission and distribution.

In electricity, the primary impediment to competition is not the lack of open access but the local exclusive franchise, usually in the form of state-level requirements that a producer hold a certificate of convenience and necessity in order to offer service.

In a free market, others should have every right to compete with utilities, but how they do so is their own problem to solve, and it is not an insurmountable one. (Several Competitive Enterprise Institute publications explore the theme of a free market alternative to mandatory open access; see www.cei.org.)

For reform to foster technological advances fully, the size of the regulated component must shrink, not grow, as it may under open access. Mandatory access can itself discourage the development of some important new technologies by tilting the playing field back toward central generation. As evidence of this, energy consultants are advising clients not to bother with cogeneration because open access is coming; and breakthrough R&D on the microturbines we all love is hindered by regulatory uncertainty.

Ultimately, reformers must acknowledge the fundamental problem of mandatory open access: A transmission utility’s desire to control its own property is not compatible with the desire of others to hitch an uninvited ride. No stable regulatory solutions to this problem exist.

I believe the authors would find that the technological advances they anticipate are best ensured not by imposing competition but by removing the artificial impediments to it.

CLYDE WAYNE CREWS, JR.

Director, Competition and Regulation Policy

Competitive Enterprise Institute

Washington, D.C.


Richard Munson and Tina Kaarsberg present a clear vision of where power generation could go if innovation were unleashed and institutional barriers removed. The electric restructuring now underway in California deals with many of the issues they raise.

The California Energy Commission has been a strong advocate for market economics and consumer choice. We have supported CADER, the California Alliance for Distributed Energy Resources, and we are supporting the largest state-funded public interest research program to spur innovation in the industry. Because the electric industry is highly dependent on technology, I believe that industry players who wish to become leaders will voluntarily invest in R&D to provide consumer satisfaction. Since the start of restructuring, numerous investors have approached the commission with plans to build the new highly efficient, low-emission power plants cited in the article. These facilities will compete in the open market for market share. Although California’s installed capacity is extremely clean, its efficiency needs improvement. As new facilities are constructed, such as one recently completed 240-megawatt facility operating at 64.5 percent efficiency with extremely low air emissions, they will bring competitive and innovative solutions into our market.

Despite all the optimism about new generating facilities, regulatory barriers such as those described by Munson and Kaarsberg continue to inhibit the most innovative approaches, especially those in the area of distributed generation. I strongly support their call for consideration of life-cycle emissions determination and for output-based standards. Too many regulators don’t understand the need to take into account the emissions produced by the system as a whole. In addition, emissions created when equipment is manufactured and fuels are produced are often overlooked. Another area of concern is the repowering of existing sites. Those sites and related transmission corridors have extensive associated investments in infrastructure that may be lost if environmental rules do not allow for rational cleanup and reuse.

Electricity generation and reduced air emissions represent only half of the available opportunities in a restructured industry. The other half is the opportunity for more effective use of electricity. The Electric Power Research Institute has been successful in developing electrotechnologies that reduce overall energy use and minimize pollution. Armed with consumption data available from recently invented meters and the expanding information available on the Internet, customers can take greater control over how they use electricity. An active marketplace for energy-efficient products is an important goal of California’s restructuring.

California has just begun the profound change contemplated by the authors. Although it is too early to predict the final outcome, it is not too early to declare victory over the status quo. The danger in predicting the outcome of electric industry restructuring is that we will constrain the future by lacking the vision to clearly view the possibilities.

DAVID A. ROHY

Vice Chair

California Energy Commission

Sacramento, California


Genes, patents, and ethics

Mark Sagoff provides a good overview of recent changes in the interpretation of patent law that have permitted genetically modified organisms to come to be considered “inventions” and therefore patentable subject matter (“Patented Genes: An Ethical Appraisal,” Issues, Spring 1998). He also accurately lays out the concerns of religious groups that oppose this reinterpretation on theistic moral grounds. But opposition to the patenting of life is also widespread among secular advocates of the concept of a “biological commons,” supporters of the rights of indigenous peoples to benefit from their particular modes of interaction with the natural world, and scientists and legal scholars who disagree with the rationale for the Supreme Court’s 5-4 decision in Chakrabarty, which at one stroke did away with the nonliving/living distinction in law and opened the way for eventual elimination of the novelty requirement for inventions relating to biomolecules. By not dealing with this opposition, some of which, like the religionist’s concerns, also has a moral basis, Sagoff can represent as “common ground” a formula that would give away the store (large chunks of nature, in this case) to the biotech industry in exchange for its technologists acknowledging that they do not consider themselves to be God (in Sagoff’s words, “not . . . to portray themselves as the authors of life [or] upstage the Creator”). This might be acceptable to most people on the biotech side but not to any but the most legalistic of theists. It would certainly not satisfy the secular critics of patents on life.

Since the 1980 Chakrabarty decision, U.S. law treats genetically modified organisms as “compositions of matter.” This interpretation stems from an earlier opinion by the Court of Customs and Patent Appeals that the microorganism Chakrabarty and his employer General Electric sought to patent was “more akin to inanimate chemical compositions [than to] horses and honeybees, or raspberries and roses.” Thus, a biological solecism that would have raised howls from academic scientists on the boards of all the major biotech corporations, had it been included in a high court opinion relating to the teaching of evolution, was unopposed as the basis for the law of the land when there was money to be made.

Traditions within the world’s cultures, which include but are not limited to the mainstream religions, provide ample basis for resistance to the notion that everything, including living things, is fair game for privatization and transformation into product. Such commodification would inevitably come to include humanoids: the headless clones that were recently discussed approvingly by a British scientist, as well as all manner of “useful” quasi-human outcomes of germline experimentation. The Council for Responsible Genetics, a secular public interest advocacy organization, states in a petition that has already garnered hundreds of signatures that “[t]he plants, animals and microorganisms comprising life on earth are part of the natural world into which we are all born. The conversion of these species, their molecules or parts into corporate property through patent monopolies is counter to the interests of the peoples of this country and of the world. No individual, institution, or corporation should be able to claim ownership over species or varieties of living organisms.”

By ignoring such views, which have worldwide support that has often taken the form of popular resistance to the intellectual property provisions of the biotech industry-sponsored international General Agreement on Tariffs and Trade, and instead describing the major opposition to the industry position as coming from the religious community, Sagoff winds up espousing a framework that would leave in place all but the most trivial affronts to the concept of a noncommodified nature.

STUART A. NEWMAN

Professor of Cell Biology and Anatomy

New York Medical College

Valhalla, New York


Mark Sagoff makes a valiant attempt to reconcile the divergent views of religious leaders and the biotechnology industry regarding gene patenting. Yet his analysis suffers from the same misperceptions that accompanied the original statements from the religious leaders. The foresight of our founding fathers in establishing the right to obtain and enforce a patent is arguably one of the principal factors that has resulted in the United States’ pre-eminence among all the industrial countries. Throughout the 200-plus years of our nation’s history, inventors have been celebrated for the myriad of innovative products that have affected our daily lives. In biotechnology, this has meant the development of important new medicines and vaccines as well as new crop varieties that are improving the sustainability of agriculture.

The question of “ownership” of life has been wrongly stated by the clergy. As representatives from the Patent and Trademark Office have often noted, a patent does not confer ownership; it grants the holder the exclusive right to prevent others from profiting from the invention for a period of 20 years from the time the patent was filed. Second, Sagoff alludes to patents on naturally occurring proteins. The proteins themselves are not the subject of composition-of-matter patents. What is patented is a method of purification from natural sources or through molecular cloning of DNA that will express the protein. Thus, I cannot own a protein that is produced in the human body. I can, however, have a patent on the expression of the protein or on the protein’s use in some therapeutic setting.

Sagoff’s summary of the usefulness of the patent law gives short shrift to the important feature of public disclosure. When a patent is published, it provides a description of the invention to all potential competitors, permitting them the opportunity to improve on the invention. Thus, although the original patent holder does have a period of exclusivity in which to use the invention, publication brings about new inventions based on the original idea. It is fruitless to try to protect new biotechnology inventions as trade secrets because of the large number of researchers in the industrial and academic sectors. Despite the relatively brief history of patents in the biotechnology area, there are countless examples of new inventions based on preceding patents.

Sagoff’s search for common ground leads to proposed legislation modeled on the Plant Variety Protection and Plant Patent Acts (PVPA and PPA). Passed at a time when new plant varieties could be described only by broad phenotypes, these acts were designed to provide some measure of protection for breeders of new plant varieties. Because a plant breeder can now describe the new traits in a plant variety at the molecular level, a patent can be obtained. This offers more complete protection of the invention. Consequently, the PVPA and PPA are infrequently used today.

Sagoff’s proposal is a solution in search of a problem. The case has not been made that under U.S. patent law the issuance of a patent confers either ownership of life or credit for its creation. Changing the law to eliminate use patents for biotechnology inventions would surely cause major uncertainties in companies’ ability to commercialize new discoveries.

ALAN GOLDHAMMER

Executive Director, Technical Affairs

Biotechnology Industry Organization

Washington, D.C.


Sensible fishing

Carl Safina’s “Scorched Earth Fishing” (Issues, Spring 1998) highlights a number of critical issues regarding the conservation of marine systems and the development of management strategies for maintaining sustainable catches from marine fisheries. The present dismal state of many wild fisheries is the result of poor management in three interconnected areas: overfishing, bycatch, and habitat alteration.

A colleague and I just completed a global review of the literature on the effects of fishing on habitat, to serve as a reference for U.S. federal fisheries managers. Measurable effects on habitat structure, benthic communities, and ecosystem processes were found to be caused by all types of mobile gear. Because little work has been done to assess the effects of fixed-gear harvesting strategies, data are not available to suggest that fixed rather than mobile gear be used. However, common sense tells us that individual units of fishing effort, if transferred from mobile to fixed gear, would reduce the areas affected. Ultimately, it is the frequency and intensity of effects that change marine systems (for example, how many tows of an otter trawl are equivalent to one pass of a scallop dredge, how many sets of a gillnet are equivalent to one otter trawl tow, and so on).

Until we have much greater knowledge of how fishing mortality, bycatch, and habitat alteration interact to produce changes in marine ecosystems, precautionary approaches must be instituted in management. Total harvests must be constrained and the areas open to fishing reduced in size. Error must be biased on the side of conservation, not the reverse.

I fully concur with the suggestion that we require no-take reserves to serve as barometers of human-caused effects, to allow representative marine communities to interact under natural conditions, and to serve as sources of fish for outside areas. Even here, we are forced to broadly estimate where and how large such no-take areas should be. For many species, we have little to no data on movement rates, sources and sinks for larvae, and habitat requirements for early benthic life stages. Only by adaptively applying precautionary approaches in all three areas of management will we develop the knowledge and wisdom to manage ecosystems for the benefit of both humans and nature.

PETER J. AUSTER

Science Director, National Undersea Research Center

Research Coordinator, Stellwagen Bank National Marine Sanctuary

University of Connecticut at Avery Point

Groton, Connecticut


I write these words while at the helm of my fishing trawler on a trip east of Cape Cod; the fishing is good and getting better as the trip progresses. Carl Safina’s article is on the chart table. Most crew members who see it shake their heads and say nothing. But his unfair condemnation of the sustainable fishing practices used by myself and most other trawlermen cuts deeply and demands a response.

The use of towed bottom-tending nets for harvesting fish from the sea floor is many hundreds of years old. The practice provides the world with the vast majority of landings of fish and shrimp. Bottom trawling is not without its environmental effects, but to simply declare it fishing gear non grata is not sensible. The many species that mobile gear catches (flounder, shrimp, ocean perch, cod, and haddock, to name a few) would virtually disappear from the shelves of stores and the tables of consumers around the world if bottom trawling were stopped.

Safina implies that fishermen could turn to less-invasive means of catching fish, such as hooks or traps. Yet he also knows that the use of such set or fixed gear is being restricted because it can hook or entangle mammals and birds.

What I find most disappointing about the article is that it does not take scale into consideration. It is the excessive use of high-powered fishing practices of any kind, not just mobile gear, that needs to be examined. The impact of bottom gear is acceptable when it is used in moderation and in high-energy areas that do not suffer lasting harm from the disturbance it can cause. This happens to be the vast majority of the fishable sea floor. The proof is fifty fathoms below me as I tow my net across the flat, featureless plain of mud, clay, and sand that stretches for miles in every direction. For twenty years I have towed this area. Before me, thousands more towed their nets here for the gray sole, ocean dab, cod, and haddock that we still catch. Although the effects of overfishing have been dramatic, stocks are now improving as recently implemented regulations and improved enforcement take hold. The identification of critical habitat that must be protected has begun and will continue. But fishermen should not stop using modern, sustainable fishing methods that are sound and efficient just because some scientists don’t understand how our complex gear works.

If towing a net across the seafloor is like “harvesting corn with a bulldozer,” as Safina writes, how is it that we are experiencing an increase in the populations of fish that need a healthy ecosystem to thrive? Bottom trawling in this and most sea floor communities does not lower diversity and does little permanent damage when practiced at sustainable levels, which we in New England waters are currently doing.

BILL AMARU

Member, New England Fishery Management Council


Manufacturing extension

In his review of the Manufacturing Extension Partnership (MEP) (“Extending Manufacturing Extension,” Issues, Spring 1998), Philip Shapira does a good job of tracing its origins, its successes, and some of the challenges the system will face in the future. There are two factors, however, that have contributed to the success of MEP and require more consideration.

The first of these is that MEP excels in helping manufacturers become more competitive. This is no accident. The vast amount of knowledge that the industry-trained field specialists have acquired in working with 30,000 companies has led to effective service delivery based on the understanding of how small companies grow. Some of the lessons learned include:

  1. Recognizing that cutting-edge technology is for the most part not the key to success for a business (much to the chagrin of federal labs and other public technology programs). Technology is obviously relevant, but before small companies can use it effectively, they need to be well managed. Technology is simply a tool used to attain a business objective, not an end in itself.
  2. Understanding that not all small firms are equal. The vast majority of small manufacturers are suppliers of parts and components to large companies, and their ability to modernize and grow is to a large extent limited by the requirements of their customers.
  3. Realizing that many small firms want to remain small, and growth is not part of their long-term objectives. Much to the surprise of many people in public life, most individuals start a business because they want to earn a good living for themselves and their families, not because they want to become the next Microsoft.

We have also come to realize that significant contributions to a local economy (in terms of higher-wage jobs) result when a small company becomes a mid-sized company. This generally requires that a company have a proprietary product that is sold directly in the marketplace rather than play only a supplier role. As a result of this understanding of the marketplace, MEP centers and their field agents tailor their strategies for increased competitiveness and growth to the specific needs of the customer.

The second factor is the tremendous job that the National Institute of Standards and Technology has done in creating a federal MEP organization that acts in a most un-Washingtonlike manner. I don’t think that the folks who put this system together have been given the appropriate credit. MEP would not exist without a substantial federal appropriation, but it is equally important to recognize that a national system that includes all 50 states could not have been put together without tremendous leadership, planning, flexibility, and organizational skills. The MEP Gaithersburg organization not only partners with several other federal agencies to bring the most appropriate resources to manufacturers at the local level, but it is collaborating with state affiliates to create a vision and a strategic plan for the national system. Moreover, it is holding these same affiliates accountable by using results-oriented standards applied in industry.

Pretty cool for a federal agency, don’t you think?

JACQUES KOPPEL

President

Minnesota Technology, Inc.

Minneapolis, Minnesota


Emissions trading

Byron Swift’s article (“The Least-Cost Way to Control Climate Change,” Issues, Summer 1998) on the potential uses of emissions trading to implement the Kyoto agreement cost effectively has the right diagnosis but the wrong prescription. Emissions trading certainly can reduce the overall cost of achieving environmental goals. But Swift’s fealty to the “cap and trade” emission trading model blinds him to the real issues involved in developing a workable trading system for greenhouse gases.

The potential of emission trading is becoming universally accepted. Trading is a money saver. It also provides a stimulus to technological innovation, which is the key to solving the global warming problem. Emissions trading also establishes empirically a real cost for emission reductions. This can eliminate the most nettlesome problem of environmental policymaking for the past 25 years: the perennial argument between industrialists who say an environmental program will be too expensive and environmentalists who say it is affordable. Because this kind of argument becomes untenable when prices are known, trading offers the potential to replace some of the heat with light in environmental policy debates.

Unfortunately, however, when Swift looks at how to implement such a program for global warming, he gets tangled up in his advocacy of the cap and trade approach, which has been successful in addressing the sulfur dioxide emissions that cause acid rain. But acid rain is a special case, because fewer than 2,000 large, highly regulated, and easily measured sources account for more than 80 percent of all sulfur dioxide emissions in the United States. This is very different from the problem of greenhouse gases, which are emitted from millions of sources of all different sizes and characteristics.

Swift advocates a “cap and trade” system for CO2 emissions from U.S. electric power generating stations. But power generators account for only about one-third of U.S. CO2 emissions. He admits that such a system might not work for the other two-thirds of U.S. emissions. As for the 75 percent of greenhouse gas emissions produced outside the United States, he is forced to admit that “strengthening the basic institutional and judicial framework for environmental law may be necessary,” a project that he acknowledges with remarkable understatement “could take considerable investment and many years.” In the end, then, Swift’s cap and trade approach seems workable for less than one-tenth of the world’s greenhouse gas emissions.
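A back-of-envelope check, taking the letter’s own fractions at face value (power generation as roughly one-third of U.S. emissions, and the United States as roughly one-quarter of the global total), bears out that closing estimate:

\[
\frac{1}{3} \times \frac{1}{4} = \frac{1}{12} \approx 8\%,
\]

which is indeed less than one-tenth of global greenhouse gas emissions.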

There is also a more subtle problem with limiting the market to those players whose emissions are easily measured. Although the cap and trade system will stimulate technological innovation in the source categories included in the system, it will ignore possibilities for innovation and cost savings outside the system.

The “open trading” approach is more promising. Swift’s claim that open market trades must be verified by governments in advance is incorrect. With regard to market integrity, analogous commercial markets long ago developed many mechanisms to assure honesty in transactions, including third-party broker guarantees, insurance, and independent audits. If government involvement were desired, existing agencies could be used, or new government or quasi-governmental agencies created, to invest in measures that would reduce greenhouse gases and to sell the resulting greenhouse gas credits.

Swift’s article, like much of the discussion on this subject so far, puts the cart before the horse. The big political issue in creating a market system for greenhouse gases is achieving a fair, politically acceptable allocation of the rights to emit. And the big design issue is balancing the conflicting desires for the economic efficiency of the broadest possible market and for verifiability. A cap and trade system modeled after the U.S. acid rain program does not satisfactorily address either of these issues in the context of global greenhouse emissions and therefore appears a poor candidate to achieve substantial reductions in greenhouse gases soon. As Swift admits, the rest of the world has neither the capacity nor the inclination to adopt such a system, and in any case the readily measured emissions that such a system could cover represent only a small fraction of global greenhouse emissions.

Open market trading fits the real world better than cap-and-trade systems. Of course, neither will work in the absence of government limits on greenhouse gases and a commitment to adequate enforcement. But the right trading system is a powerful tool to stimulate new and more economically efficient means of achieving greenhouse goals.

RICHARD E. AYRES

Washington, D.C.


Immigration of scientists and engineers

Since 1987, the science work force has grown at three times the rate of the general labor supply. To compound the hiring squeeze, the 1990 Immigration Reform Act resulted in a tripling of job-based visas, with scientists representing nearly one-third of the total. Immigration, along with the production of Ph.D.s by students holding temporary visas, especially in the physical sciences and engineering, has clearly been a challenge to the science and technology (S&T) system of the 1990s. Considering immigration apart from other human resource issues, however, might solve one public policy problem while exacerbating others.

We have often discussed human resource development with our colleagues Alan Fechter and Michael S. Teitelbaum. Their “A Fresh Approach to Immigration” (Issues, Spring 1997) captures well the policy choices that have to be made. There is no federal human resource policy for S&T. The federal government, through fellowships, traineeships, and assistantships, invests in the preparation of graduate students who aspire to join the science and engineering work force. No rational planning, however, shapes the selection criteria, the form of student support, the number of degrees awarded, or the deployment of those supported. The composition of the U.S. science and engineering work force reflects a combination of agency missions, national campaigns (such as the National Defense Education Act of 1958), and wild swings in demand stratified by region of the nation, sector of the economy, and industry needs.

Add to this the changing demographics of the student population, with increasing numbers from groups historically underrepresented in S&T, and this could be a defining moment for the future vitality of U.S. research and education in science and engineering. Although women and minorities have made dramatic gains in a number of S&T fields over the past three decades, their representation as recipients of doctoral degrees in most science and engineering fields is still far below their representation in the U.S. population at large, in those graduating from high school, or in any other denominator one prefers. A policy is surely needed. Addressing immigration would be a necessary part of that policy.

Fechter and Teitelbaum suggest that a balanced panel of experts propose separate immigration ceilings for scientists and engineers based on how the ceilings are affecting our national R&D enterprise, including the attractiveness of careers in science and engineering for our citizens. We would expand the panel’s focus to consider not only the attractiveness and accessibility of such careers to U.S. citizens in general but also the extent to which the availability of the world’s talent leads us to ignore the development of our native-born talent.

The proposed panel, they say, would operate under the aegis of the White House Office of Science and Technology Policy, with input from the Department of Labor, the Immigration and Naturalization Service, and federal agencies such as NSF, NIH, and NASA. If one favored a nongovernmental host for such a panel, the National Academy of Sciences could provide a forum for discussion and advice on the full range of human resource issues. We do not fear duplication of effort; rather the contrary.

One might ask: Why not simply take advantage of available immigrant talent rather than pursue the sometimes more painstaking work of developing talent among historically underrepresented native-born groups? We would argue that it is imperative to cultivate a science community that is representative of our citizenry. It is equally imperative to produce research that is responsive to citizen needs and, in turn, generates political support. There is a delicate balance to strike between welcoming all and depending on other nations to populate our graduate schools and future science and engineering work force.

Such dilemmas were often debated with our friend Alan Fechter. His passing robs us all of an analytical voice. With this article, he and Teitelbaum have ensured that the debate needed to inform human resource policy and practice will continue.

DARYL E. CHUBIN

SHIRLEY M. MALCOM

CATHERINE D. GADDY

Commission on Professionals in Science and Technology

Washington, D.C.

Bioethics for Everyone

Arthur Caplan is the Babe Ruth of medical ethics. He looks like the Babe—a healthy, affable, stout man who enjoys life and is universally liked. Like the Babe, he is prodigiously productive, with a curriculum vitae that must have more pages than many academics have publications. To most Americans, he is the best-known player in the field and has done more than anyone to make bioethics a household word. Just as the Yankees built a giant stadium to house the Babe’s activities, the University of Pennsylvania established a major program to lure Caplan from the University of Minnesota. He does not wear a number on his back, but if he did, it would surely be retired when he steps down.

There are other parallels between Caplan and the Babe. Just as the Babe could do many things well (he was a record-breaking pitcher before he decided to concentrate on hitting), Caplan can write and talk to the ivory tower academic and the layman with equal ease. He is the undisputed master of the sound bite, but he is also a well-trained philosopher of science who has written finely argued analytic articles. He has also done empirical work, joining with social scientists to define the facts that are essential to making responsible policy.

Like most popularizers of complex subjects, Caplan is often criticized by experts in his field. Some of this is mere envy. Some of the criticism, however, is based on a more serious concern about the role of the ethicist—an ill-defined title—in public policy and public education. There are legitimate questions about the nature or even existence of expertise in ethics. (Caplan, in fact, has written one of the better essays on this subject.) The worst suspicion is that ethicists are little more than sophists, spinmeisters whose major expertise is in articulating clever arguments to support their personal views, which are ultimately as subjective as anyone else’s.

Although it is self-serving to say so, I think that there is more to ethics than that, particularly in the case of bioethics. For those who want to take bioethical issues seriously, whether for personal or policy reasons, there are better and worse ways of making decisions. One of the better ways is to know the relevant facts, have a full appreciation of the competing interests, have a clear understanding of opposing points of view, and be able to support one’s decision with arguments that are at least understandable to others. Caplan’s writings offer major assistance to those who share these goals.

This book is a collection of essays and articles, some previously published, most appearing for the first time, on a wide array of current ethical controversies in health care, from Auschwitz to the XYY syndrome. At an average length of 11 pages, these are more than sound bites but something less than analytic philosophy. They are a useful starting point for the educated person who wants an accessible entree to thinking about questions such as: “What, if anything, is wrong with transplanting organs from animals to humans?” “What rules should govern the dissemination of useful data obtained by immoral experiments?” “Are there moral issues in fetal research that should trouble those who are liberal on abortion?”

As with most controversies in bioethics, intelligent consideration of these questions requires knowledge or skills from several disciplines, including medicine, law, analytic philosophy, and social sciences. Part of Caplan’s success as an educator is his firm grasp of all these elements and his ability to collect critical thinking from experts in those fields and weave them into a coherent essay. He also succeeds because of his extensive experience among clinicians, philosophers, lawyers, and policymakers. He is fully capable of being an academic, whether writing a finely reasoned argument in a scholarly ethics journal or joining with social scientists in explicating public attitudes on important policy issues. Although some of these essays are adapted from leading academic journals, the intended audience is not primarily scholars but the general reader, health professional, or policymaker who wants an introduction to a particular issue.

For every person who is suspicious of the glib, opinionated ethicist, there is someone else who is weary of the “two-handed” ethicist who presents all the relevant facts and arguments but scrupulously avoids judgment. Not to worry. Although Caplan is capable of presenting both sides of an argument, his own views are almost always in plain view. He is at his most enjoyable when he is most confident about the rightness of his position. In a piece about Dr. Jack Kevorkian and assisted suicide, he raises the familiar point that a major cause of Kevorkian’s success is the failure of physicians to offer adequate pain management. The point has been made by many but rarely as pungently as this: “The failure has nothing to do with a lack of general knowledge about pain control. It has to do with inadequate training, callous indifference to patient requests for relief, and culpable stupidity about addiction.”

Naturally, it is when he is most assertive (for example, “I believe that any form of compensation for cadaver organs and tissues is immoral”) that scholars will be most critical of his failure to present the issue in sufficient nuance or depth. It is impossible for anyone as prolific as Caplan to be completely consistent. One of his arguments against incentives for organ donation is that “No factual support has been advanced for the hypothesis that payment will increase cadaver donation.” Critics might point out that the policy he is most associated with—a federal law requiring that relatives be asked if they want to donate organs when it is medically appropriate—became national policy without much evidence that it would achieve the desired result. To his credit, Caplan has acknowledged the disappointing results of this policy elsewhere.

The essays vary along the spectrum from one-handed opinion to two-handed balance. Both can be effective. Sometimes it is the highly opinionated teacher who stimulates the most thinking, provoking the student to come up with counter-arguments. When Caplan is at his most opinionated, the clarity and pungency of his writing provide a useful template for organizing one’s own thinking. On the other hand, perhaps the most impressive essay in this collection is a short but lucid discussion of the increasing skepticism among some scholars regarding the definition of death. This is an extremely complicated issue, highly susceptible to opaque metaphysical discourse as well as oversimplification. Caplan does a remarkable job of explaining the issues in a balanced way, conveying the complexity, and offering a sensible political justification for the status quo.

Another reason for Caplan’s appeal is that he is not easily pigeon-holed into traditional political categories. He is sometimes libertarian, believing, for example, that competent people should have considerable latitude to use the new reproductive technologies; but there is a paternalistic element to his insistence on more regulation of infertility clinics. His views are generally centrist and reflect an attempt to find the middle ground. He argues against publication of results that were obtained from immoral experiments but supports the use of ill-gotten data that is already in the public domain, provided certain additional requirements are met, including disclosure that the information was obtained immorally. He is troubled about the spread of assisted suicide but understands that “legalization may be a good even for those who choose not to take this path. The mere fact that the opportunity for help in dying exists may help some persons to endure more disability or dysfunction than they otherwise might have been willing to face.”

An educated friend with no background in medicine, law, or ethics asked me to recommend a book that would introduce him to rational discourse in bioethics without putting him to sleep. I can recommend this book to such a novice at the same time as I find it informative and thought-provoking for someone such as myself who has spent most of his adult life thinking about these issues.

From the Hill – Summer 1998

Space station woes infuriate Congress

Cost and schedule overruns for the international space station program are increasingly exasperating members of Congress, even those who have fought long and hard to support the program. At a March hearing before the House Committee on Science Subcommittee on Space and Aeronautics, Science Committee Chairman Rep. F. James Sensenbrenner, Jr. (R-Wisc.) compared the space station to the Titanic: “The Titanic struck a single iceberg, with tragic consequences. The space station seems to be careening from one to the next, none of which has been big enough to sink the program.” Added Rep. Dana Rohrabacher (R-Calif.), the subcommittee chairman, “I don’t know how much more of the international space station program we can stand.”

Much of the concern expressed at the hearing centered on the station’s increasing cost overruns. During the FY 1998 budget authorization process, NASA notified Congress that the station program would require an additional $430 million in funding authority because of Russia’s inability to provide the station’s service module on schedule; increased costs incurred by Boeing, the prime contractor; and the need for funding to avoid “future risks and unforeseen problems.” Congress, however, approved only $230 million of the request, taking money from the space shuttle budget and increasing overall appropriations. NASA now faces the task of convincing Congress that the remaining $200 million should still be approved.

To assist NASA, the president, in his FY 1998 emergency/nonemergency supplemental appropriations request, asked Congress to provide $173 million in transfer authority to NASA. But the request was met coolly by members of the Space and Aeronautics Subcommittee, because the money would have to come out of accounts for funding space science, earth science, aeronautics, and mission support. Rep. Ralph Hall (D-Tex.) asked why appropriators should even bother funding science accounts if NASA was eventually going to transfer the money to another program.

Because of its various problems, the space station will now require an additional 18 months to complete and $4 billion in extra funding (some of which will be paid by Russia). The new overall price tag, which includes only construction costs, is about $19.6 billion, said Joseph Rothenberg, NASA’s associate administrator for space flight, at the hearing.

Members of the Senate have also told NASA that they will not tolerate any further cost and schedule problems. On March 12, the Senate Commerce, Science, and Transportation Committee passed a NASA reauthorization bill that limits space station construction spending to $21.9 billion.

Congress takes a hard look at health research priority setting

Science funding, particularly for biomedical research at the National Institutes of Health (NIH), is expected to increase significantly during the next few years. But a larger pie is still a limited pie, and research money for some diseases will increase more than money for others. With many groups vying for disease-specific funding, a continuing debate over how research priorities are set at NIH is intensifying. The 105th Congress, which has held several hearings on priority setting, has directed the Institute of Medicine (IOM) to assess the criteria and process that NIH uses to determine funding for disease research, the mechanisms for public input into the process, and the impact of statutory directives on research funding decisions. The IOM committee is expected to issue its report this summer.

The priority-setting process is complex and multitiered, possessing formal and informal components. In balancing the health needs of the nation with available scientific opportunities, criteria such as disease prevalence, number of deaths, extent of disability, and economic costs are weighed against technological developments and scientific breakthroughs. To find this balance, NIH relies on extramural scientists, professional societies, patient organizations, voluntary health associations, Congress, the administration, government agencies, and NIH staff. Accomplished investigators evaluate grant applications for merit. National advisory councils consisting of interested members of the public and the scientific and medical communities review policy. Outside experts, Congress, patient groups, the Office of Management and Budget, and other groups and agencies recommend budgetary and programmatic improvements. The final word on research programs, however, lies with the NIH director and the directors of the individual institutes.

Philip M. Smith, former executive officer of the National Research Council, has praised the current process as “pretty well right,” and the Federation of Behavioral, Psychological, and Cognitive Sciences has said that the current structure provides “many avenues of influence.” However, others are concerned that it lacks a mechanism for public input. Instead of pursuing NIH channels, many groups seeking increased research funding on specific diseases appeal directly to Congress.

Congress has the power to earmark funds for particular research areas, a process that groups such as the National Breast Cancer Coalition believe is essential for maintaining public input. But many members of Congress are not comfortable with appropriating dollars on a political rather than a scientific basis. At a March 26 hearing held by the House Commerce Committee’s Subcommittee on Health and Environment, Rep. John Porter (R-Ill.) said that if Congress consistently followed the advice of the loudest and most persistent advocacy groups, limited research dollars would be monopolized, leaving countless scientific opportunities unfunded. Porter, who chairs the appropriations subcommittee that funds NIH, recognizes the authority that Congress has to earmark but strongly opposes moving one disease ahead of another politically. “It would be a terrible mistake,” he said, agreeing with NIH officials who stress the importance of leaving research spending priorities to scientists.

Government’s role in research studied

Most economists and science policy experts agree that the federal government’s role in funding basic research is irreplaceable. However, as the R&D process has become more complex during the past half-century, the line between research that generates broad benefits and research that primarily benefits private industry has become blurred. At an April 22 hearing, the House Science Committee heard various views on the appropriate roles of government and industry in funding research, as well as appropriate mechanisms for transferring new knowledge to the private sector. The hearing was the sixth held as part of the House’s National Science Policy Study, headed by Rep. Vernon J. Ehlers (R-Mich.), which is revisiting the landmark 1945 Vannevar Bush report that established the federal government as the primary source of funds for basic scientific research. The Ehlers study was expected to be submitted to the Science Committee by the end of June.

Claude E. Barfield of the American Enterprise Institute said he estimates that one-half to two-thirds of economic growth can be attributed to technology advances and that a solid basic research effort funded largely by the federal government underpins these advances. However, he pointed out that the federal government has limited resources and oversteps its role when it supports precompetitive commercial technology development, such as the Commerce Department’s Advanced Technology Program. George Conrades of the Committee for Economic Development agreed, stating that the development and commercialization of technologies is a private sector function, except where funding serves broader government missions such as defense.

However, Conrades said that most private basic research is designed to fill gaps in broader applied research programs aimed at developing new products. Because of this commercial orientation, industry will never make sufficient investments in basic research. In 1997, of the more than $130 billion that industry spent on R&D, less than 10 percent was for basic research. And industry’s investment is only one-quarter of the total U.S. basic research effort, according to a recent report by the American Association for the Advancement of Science.

Although many members of Congress are critical of federal support for commercial projects, they recognize that states, with their more direct ties to industry, have a different role. William J. Todd, president of the Georgia Research Alliance, argued that his corporation, which was created by Georgia businesses, is one of the best examples of effective public-private partnerships. The alliance relies on the federal government to support basic research through competitively awarded grants to Georgia’s universities. This research then forms the basis of new discoveries and innovation, benefiting the government, business, and universities.

“Compromise” bill on encryption introduced

In the latest legislative attempt to deal with the controversial issue of encryption policy, Sen. John Ashcroft (R-Mo.) and Sen. Patrick Leahy (D-Vt.) introduced on May 12 what they call the E-PRIVACY Act (S. 2067). The bill would liberalize current restrictions on exports of encryption technology, but it also includes some law enforcement-friendly provisions, resulting in what its supporters say is a compromise.

The bill would allow continued access by U.S. citizens to strong encryption tools and would bar any requirement that users give a key to their data to a third party. (The administration and law enforcement agencies have insisted that access to encrypted data is essential for national security and for effectively prosecuting criminals.) It would alter current export policies by allowing license exceptions for encryption products that are already generally available, after a one-time review by the Department of Commerce.

The bill would also establish a National Electronic Technology (NET) Center within the Justice Department to help law enforcement authorities around the country share resources and information about encryption and other computer technologies. The NET Center would help officials with appropriate warrants gain access to encrypted data.

The Ashcroft-Leahy bill joins two bills that have thus far dominated the encryption policy debate in Congress. H.R. 695, introduced by Rep. Bob Goodlatte (R-Va.), and S. 909, introduced by Sen. John McCain (R-Ariz.) and Sen. Bob Kerrey (D-Neb.), would eliminate the current cap on the power and sophistication of encryption exports. Instead, they would allow the government to approve exports based on the level of sophistication generally available abroad. The bills would also prohibit the government from forcing domestic encryption users to hand over copies of the keys to their data to a centralized government-sanctioned authority.

Software producers have led the charge against export restrictions, arguing that they damage U.S. competitiveness because strong encryption products are available internationally anyway. Advocates for privacy and free speech are also aligned against the administration position, arguing that Americans are entitled to unregulated use of encrypted communication. A powerful new coalition of software businesses and online advocacy groups called Americans for Computer Privacy was launched early in March and is now spearheading a campaign to liberalize encryption controls. Scientists also have a stake in this debate, because current encryption restrictions limit the ability of computer scientists studying cryptography to publish their findings.

On March 17, the Senate Constitution, Federalism, and Property Rights Subcommittee listened to testimony about the constitutionality of encryption regulations. One of the witnesses was Cindy Cohn, lead counsel in Bernstein v. the Department of Justice, et al. For six years, Daniel Bernstein, a computer scientist, has been trying to publish on the Internet an encryption program that he wrote, an act that violates current U.S. policy. Arguing that his right to free speech had been violated, Bernstein took his case to court. According to Cohn, a federal district court in the Northern District of California ruled that “every single one of the current (and previous) regulations of encryption software are unconstitutional.”

Cohn said that the current legislative proposals regarding encryption do not address the issues raised by the Bernstein case. H.R. 695, for instance, “does not clearly protect scientists such as Professor Bernstein but only protects those who seek to distribute mass market software already available abroad. This means that American scientists can no longer participate in the ongoing international development of this vital and important area of science.” The new E-PRIVACY bill has been criticized for the same reason. Other witnesses outlined similar concerns, noting that framers of the U.S. Constitution regularly enciphered their correspondence, using techniques that led to modern digital encryption. The sole administration witness at the hearing, Robert S. Litt of the Department of Justice, in referring to the Bernstein case, argued that “a restriction on the dissemination of certain encryption products could be constitutional” even if the products are being distributed for educational or scientific purposes.

New concerns about national security

Nuclear weapons tests by India and Pakistan and the possible leakage of sensitive satellite technology to China have once again focused Congress’s attention on national security issues. Soon after India’s nuclear tests were announced in May, Senate leaders pressed for a vote to force the administration to deploy a national missile defense system as soon as it is technologically feasible. Senate conservatives have been pushing for early deployment for several years, but the administration has resisted. The proposal, however, failed on a 59-to-41 vote, one short of the 60 needed to cut off debate and bring the bill to the floor.

Meanwhile, the House turned its attention to the topic of technology transfer, after reports surfaced that critical technical knowledge may have been transferred to Chinese authorities when U.S. satellite makers launched their systems on China’s Long March vehicles. Concern that China might be able to apply such knowledge to improve its own missile capabilities led the House to overwhelmingly approve a ban on any further launches of U.S. satellites by the Chinese.

House approves database bill

On May 19, the House of Representatives passed the Collections of Information Antipiracy Act (H.R. 2652), introduced by Rep. Howard Coble (R-N.C.). The bill would strengthen legal protection for database publishers against unauthorized copying of their collections.

Database producers have long been calling for legislation to prevent others from electronically copying their data, repackaging it, and selling it. However, some members of the science and education communities are concerned that the Coble bill is too broad and might unduly restrict access to valuable scientific data.

Rep. George Brown (D-Calif.), ranking minority member of the House Science Committee, was the only member of Congress to speak out against the bill when it was brought to the floor. “The problem is that the bill has not found yet a proper balance between protecting original investments in databases and the economic and social cost of unduly restricting and discouraging downstream application of these databases, particularly in regard to uses for basic research or education,” Brown said.

Coble and Judiciary Committee ranking member Rep. Barney Frank (D-Mass.), however, argue that the bill fills a gap in current U.S. copyright law while still addressing the concerns of the research and education communities. “We make a distinction here in this bill between commercial use of someone else’s property and the intellectual use. If people think we have not done the balance perfectly, I would be willing to listen, but they do not want to come forward with specifics,” Frank said. Earlier in the session, the bill was amended to make employees and agents of nonprofit and educational institutions exempt from criminal liability if they violate the proposed law.

Resolving the Paradox of Environmental Protection

The next big breakthrough in environmental management is likely to be a series of small breakthroughs. Capitol Hill may be paralyzed by a substantive and political impasse, but throughout the United States, state and local governments, businesses, community groups, private associations, and the Environmental Protection Agency (EPA) itself are experimenting with new ways to achieve their goals for the environment. These experiments are diverse and largely uncoordinated, yet they illustrate a convergence of ideas from practitioners, think tanks, and academia about ways to improve environmental management.

One hallmark of the management experiments is an increased emphasis on achieving measurable environmental results. A second hallmark is a shift away from the prescriptive regulatory approaches that allowed EPA or a state to tell a company or community how to manage major sources of pollution. The experimental approaches still hold companies and communities accountable for achieving specified results but encourage them to innovate to find their own best ways to meet society’s expectations for their total operations. The experiments share a third hallmark: They encourage citizens, companies, and government agencies to learn how to make better environmental decisions over time.

EPA needs a regulatory program that is both nationally consistent and individually responsive to states, communities, and companies.

EPA is initiating some of those changes, as well as responding to initiatives taken by state and local governments, groups, and companies. A report published by the National Academy of Public Administration (NAPA) in September 1997, entitled Resolving the Paradox of Environmental Protection: An Agenda for Congress, EPA, and the States, identified and analyzed some of the most significant environmental initiatives under way in the United States, including EPA’s Project XL pilots, state efforts to encourage businesses to learn about and correct their environmental problems, and the implementation of the National Environmental Performance Partnership System (NEPPS) with the states. The report also focused on the challenge of developing performance indicators and an environmental information system that could support the new management approaches.

The increased emphasis on performance-based management responds to two social goals: increasing the cost-effectiveness of pollution controls and ensuring that the quality of the nation’s environment continues to improve. In the past, EPA and its state counterparts could exercise authority without much concern for the bluntness of their regulatory tools. Over time, the cost of many end-of-the-pipe pollution controls rose faster than the benefits they produced, so environmental improvement began to look too expensive. Now, however, the public expects agencies to strive for more cost-effective and less disruptive approaches.

EPA, state environmental agencies, and the regulated community need to accelerate the shift to performance-based protection, because several environmental problems are likely to become more serious and more expensive to manage in traditional ways. Chief among those problems are emissions of greenhouse gases, which may produce global climate change; polluted runoff from farms, urban streets, and lawns; the deposition of persistent organic pollutants and metals from the air into water bodies; and the destruction or degradation of critical natural habitats, including wetlands. Continued economic growth in the United States and in the developing world will also increase certain types of environmental stresses, particularly those caused by consumption of fossil energy.

EPA could not manage most of these problems through traditional means for three reasons. First, these problems arise from disparate sources that are so small and numerous that traditional end-of-the-pipe pollution controls often are neither technically feasible nor politically acceptable solutions. Second, the problems often require action by more than one EPA program, and this is difficult under EPA’s “stovepiped” statutes and organization. Third, many of the problem-causing activities are within the legal spheres of state and local governments or of federal agencies other than EPA.

One of the most serious threats to rivers, lakes, and estuaries, for example, is the nutrients flowing directly from huge new feeding operations for hogs, chickens, and turkeys, and indirectly from farm fields where animal wastes are spread as fertilizer. EPA recently proposed that it begin regulating the largest feeding operations on the same basis as factories and municipal sewage plants. This is an important step, but addressing runoff from smaller feedlots and from farm fields will require technical assistance, economic incentives, and coordinated action under agricultural and environmental statutes, as the states of Maryland and North Carolina discovered after their nutrient-rich waters spawned outbreaks of Pfiesteria, a toxic microorganism that killed fish and sickened humans.

Fortunately, many of the new approaches that will allow the nation to manage its remaining environmental problems will also help improve the cost-effectiveness of environmental protection overall.

A paradox and an imperative

EPA’s central challenge is to learn to maintain and improve a regulatory program that is both nationally consistent and individually responsive to the particular needs of each state, community, and company. That paradox can be resolved only if the agency and Congress continue to adopt performance-based tools. These include information management systems, market-based controls, compliance-assurance strategies, regulations that encourage firms to choose among compliance strategies, and new partnerships with states and businesses. Each of these approaches creates incentives for regulated parties to improve their overall environmental performance without specifying how they should do so. The tools are more flexible and more challenging than traditional command-and-control regulations, because they encourage innovation by rewarding those who find the least expensive ways to achieve public goals. Performance-based tools can either augment or replace traditional regulatory approaches. They encourage experimentation and learning, and they reward individuals, firms, and public managers who develop and use environmental and economic data. The most promising of the tools will foster an integrated approach to environmental protection, one that looks at air quality, water quality, ecosystem health, human health, and other social values as a whole.

Much has changed in the 30 years since the United States instituted national pollution-control programs. Americans have become more sophisticated about environmental problems and have supported the broad development and distribution of environmental professionals throughout federal agencies, state governments, local governments, and nongovernmental advocacy groups. Congress and EPA helped create that dispersed management capacity through their policies of delegating federal programs to the states. Indeed, the nation now relies on state and local agencies to do most of the work of writing permits, finding and prosecuting violators, and communicating with the public about environmental conditions. In addition, technological advances have made remote sensing and continuous emissions monitoring possible for certain types of factories and environmental conditions, effectively automating the role of the environmental inspector. The proper incentives could speed the further development and use of advanced monitoring technologies in coming years.

These changes make it possible for EPA and the states to expand their use of less prescriptive tools to achieve public goals. In a 1995 report, Setting Priorities, Getting Results: A New Direction for the Environmental Protection Agency, a NAPA panel stressed the importance of building more flexibility into the regulatory system to address problems more effectively and keep the costs of environmental protection from rising. The academy urged the administration to continue to develop its Common Sense Initiative, which aims to customize regulations and incentives for specific industries, and to seek legislative authorization for a program to grant firms and communities flexibility if they do more than just comply with existing requirements. The academy urged EPA to find ways to integrate its management of air pollution, water pollution, and waste management, thus allowing individual firms, communities, industrial sectors, or states the opportunity to find efficiencies by taking a holistic approach to problem solving.

EPA, state environmental agencies, and the regulated community need to accelerate the shift to performance-based protection.

EPA pursued many of the academy’s recommendations in the regulatory reinvention program it announced in the spring of 1995. The agency has not sought congressional authorization for most of these programs, however. Instead, EPA has attempted to maximize the flexibility within its statutes and to manage its interactions with the public and the regulated community more effectively.

Creating options and accountability

Three environmental innovations-EPA’s Project XL, Minnesota’s self-audit strategy, and NEPPS-illustrate how the new management approaches attempt to create new options for regulated entities while also ensuring accountability to the public.

The letters in XL are a loose acronym for environmental “excellence” and corporate “leadership,” the two qualities the project was designed to unite. As originally promoted, Project XL would allow responsible companies and communities to replace EPA’s administrative and regulatory requirements with their own alternatives. Through as many as 50 facility agreements, Project XL would help demonstrate which innovative approaches could produce superior environmental performance at lower costs.

Although few XL agreements have yet come to fruition, those that have suggest that the goals of the initiative are well founded. Individual facilities have been able to find smarter ways to reduce their environmental impact than they would have achieved by merely complying with all of the existing air, water, and waste regulations. Weyerhaeuser, for example, reached an agreement with the state of Georgia and EPA that removes a requirement that a company paper mill invest in a new piece of air pollution control equipment and adds a commitment by the company to reduce bleach-plant effluent to the Flint River by 50 percent, improve forest management practices on 300,000 acres to protect wildlife, and reduce nonpoint runoff into watersheds. The Intel Corporation reached an agreement allowing a manufacturing facility outside Phoenix, Arizona, to change its production processes without the customary prior approval, provided that the plant keeps its air pollutants below a capped level and provides a detailed, consolidated environmental report to the community every quarter. The XL agreement allows Intel to innovate more rapidly than it otherwise could, and that has considerable value in the computer industry.

Relatively few companies have followed these leads, because XL proposals have often been mired in controversy and uncertainty. EPA insists that companies demonstrate that their proposals will achieve “environmental performance that is superior to what would be achieved through compliance with current and reasonably anticipated future regulation.” That test inevitably requires a degree of judgment that cannot be quantified. In the Weyerhaeuser case, for example, there is no way to prove that the improved land management practices will offset any environmental damage caused by the company’s break on installing air pollution control equipment. Because EPA lacks clear statutory authority to make such judgments, EPA managers have been very conservative about the proposals they accept. The fear of citizen suits has inhibited companies from proposing XL projects as well.

Intel’s executives decided to take a conservative approach in their proposal, avoiding any actions that would violate state or federal environmental standards or require any waiver from enforcement agencies. They feared that even if EPA blessed an XL package and promised not to enforce the letter of the law, they would be liable to lawsuits from citizens. One reason EPA stressed the importance of stakeholder participation in the original XL proposal was to reduce the likelihood of such suits. Presumably, participants in the negotiations would conclude that the final agreements were in the public interest and thus refrain from suing. To date, none of the agreements has been challenged in court.

At the time of this writing, EPA officials continue to assert that they can make Project XL work under existing statutory authority, but the legal underpinnings of the pilot projects have changed. Rather than promising to waive enforcement, EPA now adopts site-specific rules to cover the most complex projects. That is, the agency issues a rule under existing federal statutes that applies only to one site. Before issuing a rule, EPA determines that the statutes provide a legal justification for the rule. Although it eliminates the problem of firms being held liable for “breaking” laws, EPA’s solution creates another dilemma-setting precedents and raising questions of equity. If Intel’s emissions cap meets the requirements of the Clean Air Act, then why shouldn’t identical permits be legal for other minor sources of air pollution?

If EPA had clear statutory authority to approve more dramatic experiments, firms would be more likely to propose them. Certainty is important to firms if they are to put their reputations on the line while investing in a public negotiation.

Exploiting the power of information

The value that companies place on their reputations has created other opportunities for EPA and states to experiment with new approaches to achieving environmental protection. Companies’ response to the creation of the Toxic Release Inventory (TRI) demonstrated that merely publishing information about firms’ emissions rates could lead many firms to reduce those emissions. TRI seems to have worked because firms wanted to avoid being on the high end of the list and because it forcefully brought emissions rates to the attention of executives who previously had noted only that they were in compliance with regulations.

Various federal and state programs, including one managed by the Minnesota Pollution Control Agency, have begun to use similar information-based tools. In Minnesota, companies or municipalities that discover, report, and fix environmental violations are often able to avoid the fines or penalties that might have been imposed had a state inspector found the problems. A 1995 state law authorized this approach to encourage firms and municipalities to conduct self-inspections or third-party environmental audits. (Minnesota does not grant these firms a right to evidentiary privilege or immunity, as some states, including Texas and Colorado, have done. EPA has pushed those states and others to rescind privilege and immunity statutes because unscrupulous firms could use them to avoid penalties for deliberate violations of environmental regulations.) Participating Minnesota companies receive a “green star” from the state. Thus, the statute provides companies with a new management option that stresses accountability over penalties. Small businesses have been exercising that option rather than face inspections by state officials. The result appears to be broader compliance among businesses that had historically operated below the state’s radar.

On a grander scale, the International Organization for Standardization (ISO) has developed the ISO 14001 standard for corporate environmental management systems. ISO-“certified” firms and organizations maintain that the voluntary process delivers real environmental improvements, usually as a byproduct of the attention it focuses on materials use and waste management. EPA’s Environmental Leadership Program, a reinvention initiative of the Office of Enforcement and Compliance Assurance, has been encouraging firms to adopt ISO 14001 or similar environmental management systems. Some EPA officials and industry experts have speculated that ISO-certified firms might qualify for expedited permitting, looser reporting standards, or other incentives that would encourage and reward voluntary commitment to careful environmental management. However, because ISO 14001 is neither an enforceable code nor suitable for most small businesses, it is not a panacea.

These information-based tools establish incentives for improved performance while also making the public and private environmental management system better informed and thus better able to make performance-enhancing decisions. They are dynamic in ways that traditional end-of-the-pipe technology standards generally are not.

New opportunities for states

In perhaps its boldest reinvention experiment, EPA signed an agreement with the states in 1995 that created NEPPS, which attempts to establish more effective, efficient, and flexible relationships between EPA and state environmental management agencies.

Before NEPPS, the air, water, and waste division managers in EPA’s regional offices would sign individual agreements with their state counterparts spelling out how much federal money the state programs would receive and specifying requirements such as how many inspections state employees would have to conduct and how many permits they would have to issue. Throughout the 1980s and early 1990s, state commissioners grew increasingly frustrated with these agreements, because they tended to focus on bureaucratic activities rather than environmental results and because they were the vehicles EPA used to allocate its numerous categorical grants to specific activities. NEPPS has begun to structure EPA-state agreements around efforts to address specific environmental problems. State commissioners now may negotiate a single comprehensive agreement with the agency and pool much of the federal grant money that used to be categorically defined. EPA and the states are attempting to develop sets of performance measures that will keep the agencies’ attention on the environment rather than on staff activities.

After almost three years of implementation, some 40 states are participating in the new system at some level. Some states are attempting to use the process of negotiating a performance partnership agreement as a vehicle for increasing public involvement in priority setting. The New Jersey Department of Environmental Protection, for example, is investing in developing indicators of environmental conditions and trends that will provide useful information to environmental professionals and the lay public. Nevertheless, NEPPS is still in its infancy. The real test of its effectiveness will come when states, EPA, and the public must decide what to do if the core performance measures show little progress. NEPPS will work only if the states and the public are interested enough and EPA is resolute enough to insist on better performance.

Until Congress reforms itself and its systems, the promise of a fully integrated environmental program will not be met.

Meanwhile, the demands of EPA’s own enforcement office and inspector general have tended to reinforce the old ways of doing business and discourage risk taking, just as the threat of citizen suits has discouraged XL agreements. Some states are still not interested in NEPPS, perceiving it as a waste of energy as long as EPA still requires them to submit information on the old bureaucratic measures and as long as EPA holds onto its traditional oversight tools: the right to bring enforcement actions in states and to remove delegated programs from a state’s control.

If it can be successfully implemented, NEPPS will be the perfect complement to the ultimate reinvention experiment endorsed by Congress: the Government Performance and Results Act (GPRA) of 1993. GPRA requires EPA and all other federal agencies to supply Congress with a strategic plan, a set of measurable goals and objectives, and periodic reports on how well the agency is making progress toward those objectives. The NEPPS agreements could provide the foundation of such an effort.

Needed: Better data

The key to success for all the performance-based systems described above is for EPA, the states, and the public to have access to an extensive base of reliable authoritative information about environmental conditions and trends. EPA’s information systems are not yet adequate to meet that challenge.

Technological advances are beginning to make it possible for agencies to collect, manipulate, and display far richer and more extensive information about environmental conditions. It is becoming cheaper and easier to measure emissions and environmental conditions remotely as well as automatically. Increasingly, firms and states can submit reports electronically, making it possible for all environmental stakeholders to have quick and easy access to environmental information.

Even so, technology’s promise to dramatically improve decisionmakers’ access to information about environmental conditions and trends has not yet been realized. Despite large public and private investments in environmental monitoring and reporting, the nation does not have a comprehensive and credible environmental data system. That deficiency makes it difficult for decisionmakers and the public to answer basic questions about the effectiveness of environmental regulatory programs. The problem has several components: The data available to EPA are incomplete, fragmented among different program offices and their databases, and often unreliable. And there are more basic gaps in scientific understanding of environmental problems, their causes, and their consequences. EPA has struggled for years to address these information problems, and it is not yet clear that the agency or Congress has put in place a program that will soon produce objective, credible, and useful environmental statistics.

Congress must play

Taken as a whole, EPA’s reinvention initiatives are moving the nation’s environmental management system in a positive direction. To date, however, those initiatives have operated only at the margins of EPA’s core programs and will continue to be of only marginal importance unless Congress and the agency strengthen their commitment to experimentation and change. The states’ actions are broadening the base for reinvention and making many of the tools of performance-based management familiar to business managers, regulators, and the general public. As that base broadens, the impasse at the federal level will probably dissolve.

EPA’s underlying structural problems, its authorizing statutes, and the fragmentation of congressional committees with a role in environmental issues all remain barriers to effective multimedia action and performance-based management. The agency’s media offices still do the bulk of the day-to-day business and still focus on chemical-by-chemical, source-by-source regulation. State agencies, professional networks, funding channels, advocacy groups, and congressional committees have replicated that structure, creating enormous structural inertia. One product of that inertia is inefficiency. Even if every one of EPA’s regulations made perfect sense by itself, they could not add up to the ideal environmental management regime for different kinds of facilities operating in different geographical settings with different population densities and weather conditions. The nation’s physical, economic, and political conditions are too varied for the old regulatory approaches to fit well across the nation. A focus on performance will improve the application of those approaches, but ultimately EPA needs a more effective way to address problems and facilities holistically, as Project XL is striving to do. Every EPA administrator has struggled with those problems. Eventually, Congress will need to help resolve them.

EPA has not acted on two of the major recommendations in Setting Priorities, Getting Results: producing a comprehensive reorganization plan to break down the walls between the media offices and developing a comprehensive integrating statute for congressional action. One reason for the lack of progress has been the fierce party partisanship on Capitol Hill. Although it is not clear when the political climate will be more conducive to progress on such a difficult task, the academy’s recommendations for changes will remain relevant and important. To better integrate policymaking across program lines, EPA should study the effects that reorganization has had on its regional offices and the states they serve as well as the reorganizations that several state environmental agencies have undertaken. The New Jersey Department of Environmental Protection, for example, has integrated its permitting systems, which may suggest lessons for EPA.

Another of the most politically challenging recommendations in the 1995 report remains untouched: Congress has not consolidated its committees that have roles in environmental oversight. That continued fragmentation of responsibility in Congress takes its toll on EPA-and on the environment itself-by reinforcing fragmented approaches in the agency. Until Congress reforms itself and its systems, the promise of a fully integrated environmental program will not be met.

EPA has tried numerous strategies in the past few years to overcome some of the challenges created by its patchwork of authorizing statutes. Significant progress, however, will require statutory reform. By beginning a gradual legislative process to integrate EPA’s authorities, Congress would encourage EPA to seek the most efficient ways possible to improve the nation’s environment. It is important to restate the obvious: The nation’s environmental statutes, and EPA’s implementation of them in partnership with the states, have accomplished great environmental gains that benefit all Americans and strengthen the nation’s future. It is also obvious that the nation needs to do more to improve the quality of the environment-domestically and globally-and to find better ways to do that work.

Congress should lead that change by working with EPA to develop an integrating statute-a bill that would leave existing statutes essentially intact while beginning a process to harmonize their inconsistencies and encourage integrated environmental management. The integrating statute should be more modest, less threatening, and hence more pragmatic than a truly unified statute. The bill should accomplish the following five objectives:

  1. Congress should articulate its broad expectations for EPA in the form of a mission statement.
  2. Congress should direct EPA to integrate its statutory and regulatory requirements for environmental reporting, monitoring, and record keeping. This effort should eliminate redundant or unnecessary reporting requirements, fill reporting or monitoring gaps where they exist, and establish consistent data standards. This would make the information more useful to public and private managers, regulators, and the public.
  3. Congress should direct EPA to conduct a series of pilot projects to fully test the ideas that inspired Project XL. The statute should authorize EPA to use considerable discretion to develop model projects for multimedia regulation, pollution reduction, inspections, enforcement, and third-party certification of environmental management systems. The goals of such pilot projects should be to develop the most productive ways to achieve environmental improvements on a large scale. Thus, some of the pilots might test the potential for future multimedia regulation of specific sectors, or opportunities for interrelated businesses and communities to achieve their environmental and social goals through totally unconventional means requiring more freedom to innovate than the statutes currently permit.
  4. The statute should affirm that Congress authorizes and encourages EPA to use market-based mechanisms such as trading systems to address pollutants, including nonpoint pollutants, when the agency believes they would be appropriate.
  5. Congress should direct EPA to support a series of independent evaluations of the pilot projects and other activities that it authorizes under the statute. EPA should also provide biennial reports to Congress that include analysis of its accomplishments and barriers to accomplishment, as well as recommendations for congressional action.

Adopting such a statute would have substantive and symbolic value. Substantively, the statute would authorize changes that should enhance the nation’s ability to make new environmental improvements at the lowest possible cost. By authorizing experiments in multimedia management, for example, the statute should encourage innovations that would reduce nonpoint pollution or ecosystem damage. Symbolically, the statute would settle the debate within EPA and the regulated community about whether integrated performance-based protection is important, appropriate, or legal.

In the months that have elapsed since NAPA published its report, it has become clear that the passage of environmental legislation of almost any kind is highly unlikely within the next year or two. Two bills have sparked some interest, though neither is gaining much momentum. A bill sponsored by Sen. Joseph Lieberman (D-Conn.) would authorize XL-type projects. Though it resisted any such legislation for a time, EPA is giving it some support. However, detailed procedural requirements in the bill leave business unenthusiastic while failing to overcome the skepticism of environmental advocates who have resisted XL from the start. The “Regulatory Improvement Act of 1997,” also known as the Thompson-Levin Bill after its sponsors, Sens. Fred Thompson (R-Tenn.) and Carl Levin (D-Mich.), would require federal agencies, including EPA, to conduct regulatory analyses, including cost-benefit analyses, when issuing major regulations. Although it boasts bipartisan support, the bill appears to be mired in the stalemate that emerged around risk assessment and cost-benefit analysis in the 104th Congress in 1995.

Nevertheless, the concepts sketched out here are becoming widely accepted in the states and among pragmatic policy advocates. If Congress continues to take GPRA seriously and if EPA and the states continue to take NEPPS seriously, there will be a demand for more and better indicators of environmental performance and trends. That in turn should help government agencies adopt the most effective tools for managing environmental problems.

Making protection automatic

The 18th-century economist Adam Smith showed how the “invisible hand” of free markets could foster innovation, competitive pricing, and economic growth. Two hundred years later, Garrett Hardin showed how the invisible hand could also produce the “tragedy of the commons”-the depletion of shared resources in the absence of a collective decision to manage them for the public good. Paradoxically, a combination of market forces and public actions can help the nation achieve its environmental goals. The United States needs to keep making collective decisions to protect and restore the environment for the public good and the well-being of future generations. To the maximum extent possible, however, the nation should attempt to employ invisible hands-the creative energy of millions of decisionmakers pursuing their self-interest-to achieve the nation’s environmental goals.

EPA, states, localities, and the regulated community need to develop more comprehensive, comprehensible, and useful measures of environmental conditions and trends. The increase in public understanding of the environment and environmental risks over the past four decades has motivated the incredible progress the nation has made in reducing pollution levels and restoring environments. But the public will need a deeper understanding if it is to make the increasingly sophisticated judgments needed for continued improvements at reasonable costs. EPA’s efforts to develop performance-based management tools will help the public participate more fully in managing the environment. Credible information about environmental performance, public policies that harness market forces, and public pressure-the expectation of a private commitment to the public welfare-may ultimately be enough to keep most businesses and communities operating on a track of continuous environmental improvement.

Love Canal revisited

From August 1978 to May 1980, the nondescript industrial city of Niagara Falls, New York, named for one of the world’s great scenic wonders, acquired a perverse new identity as the site of one of the 20th century’s most highly publicized environmental disasters: Love Canal. It was the first, and in many ways the worst, example of a scenario that soon reproduced itself in many parts of the country. Toxic chemicals had leaked from an abandoned canal used as a waste dump into nearby lots and homes, whose residents appeared to suffer an unusually high incidence of health problems, from miscarriages and birth defects to neurological and psychological disorders. State and federal officials tried desperately to assess the seriousness of the danger to public health, hampered by a lack of reliable scientific data and inadequately tested study protocols. Controversies soon erupted, and panicky homeowners turned to politicians, the news media, and the courts for answers that science seemed unable to deliver. The crisis ended with a combined state and federal buyout of hundreds of homes within a several-block radius of the canal and the relocation of residents at an estimated cost of $300 million.

The name Love Canal has entered the lexicon of modern environmentalism as a virtual synonym for chemical pollution caused by negligent waste management. The episode left a lasting imprint on U.S. policy in the form of the 1980 federal Superfund law, which mandated hugely expensive cleanups of hazardous waste sites around the nation. Community-based environmental activism also took root at Love Canal, following the model pioneered by the local homeowners’ association and its charismatic leader Lois Gibbs. A question left tantalizingly in the air was whether, in times of heightened public anxiety, it is possible for public health officials to undertake credible scientific inquiry, let alone whether such inquiry has the power to inform policy decisions. Much has been written on this subject, including Adeline Levine’s Love Canal: Science, Politics, and People (Lexington Books, 1982), an early sociological account of the controversy.

Why, then, do we need another book about Love Canal now, 20 years after the event burst on our national consciousness? Allan Mazur, a policy analyst at Syracuse University, answers by drawing an analogy between his book and Rashomon, the classic film by Akira Kurosawa, which has come to symbolize the irreducible ambiguity of human perceptions and relationships. In the film, the story of a rape and murder is retold four times from the viewpoints of the four principal characters: a samurai, his wife, a bandit, and a passing woodcutter. The story of Love Canal, Mazur argues, involved similar discrepancies of vision, so that what you saw depended on where you stood in the controversy. But whereas the artist Kurosawa was content to leave ambiguity unresolved, the analyst Mazur is determined to reconcile his conflicting accounts so as to offer readers something akin to objective truth. No unbiased reading was possible, he implies, as long as the principals in the Love Canal drama were propagating their interest-driven accounts of what had happened and who was to blame. Now, at a remove of 20 years, Mazur is confident that his disinterested academic’s eye, liberated from “strong favoritism,” will allow us to glimpse a reality that could not previously be seen.

In pursuing the truth, Mazur imitates Kurosawa’s narrative strategy, but the resemblance turns out to be skin-deep. The book begins with six accounts of the events from 1978 to 1980, representing the viewpoints of the Hooker Chemical Company (the polluter); the Niagara Falls School Board (negligent purchaser or Hooker’s innocent dupe); two groups of homeowners who were compensated and relocated on different dates; the New York State Department of Health; and Michael Brown, the hometown reporter who broke the story and later wrote a bestseller about it. In the second part, Mazur, like Kurosawa’s woodcutter, emerges from behind the scenes to give us his rendition of the events. But whereas the woodcutter was just one more voice in Rashomon, Mazur claims something closer to 20/20 hindsight. Evaluating in turn the news coverage, the financial settlements, and the scientific evidence, he even-handedly declares that there is enough blame to go around among all the parties involved. His impatience with Lois Gibbs, however, is palpable, and he holds the news media responsible for succumbing too easily to her story, which he finds least credible despite its later canonical status.

Lost in a time warp

At 218 pages plus a brief appendix, Mazur’s version of the Love Canal story is refreshingly brief. This is all to the good, because it is hard to read this book without feeling that one is caught in a time warp. The analytic resources that the book deploys seem almost as dated as the events themselves. For instance, the list of references, drawing heavily on the author’s own prior work, shows very little awareness of the fact that scientific controversies have emerged over the past 20 years as a major focal point for research in science and technology studies and in work on risk. In Mazur’s world, therefore, all is still as innocently black and white as it seemed to be in the 1960s: Either a chemical has caused a disease or it has not; either experts are doing good science or they are not; either people are unbiased or they are interested.

In such a world, disagreements occur because people with interests distort or manipulate the facts to suit their convenience. Reason, common sense, and good science would ordinarily carry the day were it not for political activists such as Lois Gibbs who muddy the waters with their “gratuitous generation of fear and venomous refusal to communicate civilly.” Powerful news organizations are unduly swayed by “articulate and sympathetic private citizens, often photogenic homemakers, who are fearful about contamination that threatens or has damaged their families.”

There are hints here and there that the author is aware of greater complexity beneath the surface, but his failure to acknowledge nearly two decades of social science research prevents him from achieving a deeper understanding. In his commitment to some idealized vision of “good science,” for example, Mazur loses sight of the fact that standards for judging science are often in flux and may be contested even within the scientific community. Against the evidence of a mass of work on the history, philosophy, and sociology of science, he asserts that there are clear and unambiguous standards of goodness governing such issues as the use of controls, the design of population studies, the conduct of surveys, and the statistical interpretation of results. Not surprisingly, Mazur concludes that the homeowners’ most effective scientific ally, Dr. Beverly Paigen, failed to meet the applicable standards. The data-collecting efforts of nonexpert “homemakers,” such as Gibbs, are dismissed with even less ceremony.

None of this is very helpful in explaining the profoundly unsettling questions about trust and credibility that Love Canal helped bring to the forefront of public awareness. A firm grasp of constructivist ideas about knowledge creation would have helped, but Mazur evidently knows only a straw-man version of social construction that strips it of any analytic utility. Instead of using constructivism as a tool for understanding how knowledge and belief systems attain robustness, Mazur dismisses this analytic approach as mindlessly relativistic. There is an unlovely smugness in his assertion that constructivists would “take it for granted that the Indians’ account [of the Battle of Little Big Horn] is no more or less valid than the army’s account.” He tilts again at imaginary windmills a page later, writing that “Few things can be proved absolutely to everyone’s satisfaction. There is a possibility that we are all figments of a butterfly’s wing; I can’t disprove it, but I don’t care.”

One could read Mazur’s accounts of the parties’ positions in the Love Canal debate as an attempt at social history, but here again one would be disappointed. The presentation draws largely on a limited number of sources, usually in the form of first-person narratives or interviews; and even these are not always adequately referenced, as in the case of the 1978 source from which most of Lois Gibbs’s story is drawn. The effort to provide multiple perspectives on the same events often leads to unnecessary, almost verbatim repetition, as with a statement by Health Commissioner David Axelrod that is quoted on p. 98 and again on p. 169. The book will remain a useful (though not perhaps totally reliable) compendium of things people said during the controversy. There are occasional wonderful touches, as when Gibbs describes the homeowners’ appropriation of expert status at a 1979 meeting with Axelrod. All the residents who attended “wore blue ribbons symbolic of Axelrod’s secret expert panel”; Paigen’s said that she was an expert on “useless housewives’ data.” These are the ingredients with which a gripping history may someday be fashioned by a storyteller with a different agenda.

The book’s inspiration, one has to conclude, is ultimately more forensic than academic. Unlike Kurosawa’s all-too-human actors, Mazur’s institutional participants have the character of parties to a staged lawsuit, offering their briefs to the court of reconciled accounts. Mazur himself seems to relish the role of judge, able to cast a cold eye on others’ heated accounts and to sort fact from fancy. But common-law courts have always been reluctant to do their fact-finding on the basis of records that have grown too old. People forget, move away, or die, as indeed did happen in the case of David Axelrod, a remarkable public servant whom Mazur aptly characterizes as the tragic hero of Love Canal. Documents disappear. New narratives intervene, adding confusion to an already-cacophonous story. In a court of law, a rejudging of responsibility for Love Canal would have been barred by a statute of limitations. History, to be sure, admits no such restriction, but Mazur, alas, is no historian.

Finally, it is interesting to observe that recent policymaking bodies have been, if anything, more charitable toward citizen perceptions and participation than the author of this book. In 1997, for example, the Presidential/Congressional Commission on Risk Assessment and Risk Management recommended that risk decisions should engage stakeholders at every stage of the proceedings. Similar recommendations have come from committees of the National Research Council. Impersonal policymaking bodies, it appears, can learn from experience. Is it unreasonable to expect more from academic social scientists, who have, after all, more leisure to reflect on what gives human lives meaning?

Computers Can Accelerate Productivity Growth

Conventional wisdom holds that rapid change in information technology over the past 20 years represents a paradigm shift, one perhaps as important as that caused by the electric dynamo near the turn of the century. The world market for information technology grew at nearly twice the rate of world gross domestic product (GDP) between 1987 and 1994, so the computer revolution is clearly a global phenomenon.

Yet measured productivity growth has been sluggish in the midst of this worldwide technology boom. In the United States, for example, annual growth in labor productivity (output per hour of work) fell from 3.4 percent between 1948 and 1973 to 1.2 percent between 1979 and 1997. Growth in total factor productivity (TFP), or output per unit of all production inputs combined, also fell substantially, from 2.2 percent per year to 0.3 percent for the period 1979 to 1994. In light of the belief that computers have fundamentally improved the production process, this is particularly puzzling. As Nobel laureate Robert M. Solow has observed, “You can see the computer age everywhere but in the productivity statistics.”
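
For readers who want to see the arithmetic behind these two measures, the short sketch below (in Python, with purely hypothetical index numbers rather than the actual U.S. data cited above) computes average annual growth in labor productivity and in TFP from beginning and ending levels of output, hours, and combined inputs.

  # Minimal sketch, using hypothetical index numbers, of the two measures
  # discussed above: labor productivity (output per hour worked) and total
  # factor productivity (output per unit of all production inputs combined).

  def annual_growth(start_level, end_level, years):
      """Average annual (compound) growth rate between two index levels."""
      return (end_level / start_level) ** (1.0 / years) - 1.0

  # Hypothetical indexes for an illustrative 18-year period.
  output_start, output_end = 100.0, 135.0   # real output
  hours_start, hours_end = 100.0, 110.0     # labor hours
  inputs_start, inputs_end = 100.0, 128.0   # combined capital, labor, and materials

  labor_productivity = annual_growth(output_start / hours_start,
                                     output_end / hours_end, 18)
  tfp = annual_growth(output_start / inputs_start,
                      output_end / inputs_end, 18)

  print(f"labor productivity growth: {labor_productivity:.1%} per year")
  print(f"TFP growth: {tfp:.1%} per year")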

Economists have long argued that both output and productivity are poorly measured in service sectors.

Detailed analysis of the U.S. economy suggests that computers have had an impact, but it is necessary to look beyond the economy-wide numbers in order to find it. New technology affects each business sector differently. For most sectors, the computer revolution is mainly a story of substitution. Companies respond to the declining price of computers by investing in them rather than in more expensive inputs such as labor, materials, or other forms of capital. The eight sectors that use computers most intensively, for example, added computers at a rate of nearly 20 percent per year from 1973 to 1991, whereas labor hours grew less than 3 percent per year. This capital deepening (defined as providing employees with more capital to work with) dramatically increased the relative labor productivity of the computer-using sectors, those with more than 4 percent of total capital input in the form of computers in 1991.

Before 1973, labor productivity in the manufacturing sectors that invested heavily in computers grew only 2.8 percent per year, compared with 3.1 percent for those that did not. After these sectors accumulated computers rapidly in the 1970s and 1980s, however, their labor productivity growth jumped to 5.7 percent per year for 1990-1996, whereas growth in the other manufacturing sectors declined to 2.6 percent per year. Comparison of the relative performance of these sectors over time shows that computers are playing an important role in determining labor productivity.

Computer-related productivity gains in the manufacturing sectors also suggest that measurement errors have been a large obstacle to understanding the economy-wide impact of computers on productivity. Computer investment is highly concentrated in service sectors, but in those sectors there is no clear evidence of the dramatic productivity gains found in manufacturing. Economists, however, have long argued that both output and productivity are poorly measured in service sectors. If one conjectures that the true impact of computers is approximately the same in both manufacturing and services, these results imply an increasing understatement of output and productivity growth in the service sectors.

The computer-producing sector reveals yet another way in which the computer revolution affects economy-wide productivity growth. This sector experienced extraordinary TFP growth of nearly 3 percent per year in the 1980s, reflecting the enormous technological progress that enabled computer companies to churn out superior computers at lower and lower prices. This one sector, despite being relatively small in terms of private GDP (less than 3 percent), was responsible for one-third of TFP growth for the entire U.S. economy in the 1980s.
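
A rough back-of-the-envelope check, sketched below, shows how a sector that small can account for so large a share of aggregate TFP growth: its contribution is approximately its output share multiplied by its own TFP growth rate. The share and growth figures are the approximate ones quoted above; the aggregate figure is the 0.3 percent per year cited earlier for 1979 to 1994, used here only for illustration.

  # Rough check of the claim above: a small sector with very fast TFP growth
  # contributes roughly (output share x own TFP growth) to aggregate TFP
  # growth. Figures are the approximate ones given in the text.

  computer_sector_share = 0.03        # less than 3 percent of private GDP
  computer_sector_tfp_growth = 0.03   # nearly 3 percent per year in the 1980s
  aggregate_tfp_growth = 0.003        # roughly 0.3 percent per year (1979-94 figure)

  contribution = computer_sector_share * computer_sector_tfp_growth
  print(f"contribution: about {contribution * 100:.2f} percentage points per year")
  print(f"share of aggregate TFP growth: about {contribution / aggregate_tfp_growth:.0%}")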

Moving beyond aggregate data

Computers have experienced dramatic price declines and extraordinary investment growth in the past two decades. The price of computer investment in the United States decreased at the remarkable rate of more than 17 percent per year between 1975 and 1996, whereas prices for producers’ durable equipment (PDE), the broader investment category of which computers are a part, rose more than 2 percent per year. At the same time, and mostly in response to rapid price declines, business undertook a massive investment in computers. Starting near zero in 1975, the computer share in real PDE investment in the United States increased to more than 27 percent by 1996. With cumulative investment in new computer equipment near $500 billion for the 1990s, U.S. companies have clearly embraced the computer. Countries across the globe are also rapidly accumulating computers. Between 1987 and 1994, growth in the information technology market exceeded GDP growth in 21 of the 24 member countries of the Organization for Economic Cooperation and Development (OECD). These figures present a compelling view of the depth and breadth of the computer revolution. From Main Street to Wall Street, computers appear everywhere, and computer chips themselves can also be found inside automobiles, telephones, and television sets.
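
To see what a 17-percent-per-year decline compounds to over two decades, the brief sketch below applies that rate to the 21 years from 1975 to 1996; the compounding itself is our own illustrative arithmetic, not a figure taken from the underlying data.

  # Compounding a 17-percent-per-year price decline over 1975-1996.
  annual_decline = 0.17
  years = 1996 - 1975  # 21 years

  remaining = (1.0 - annual_decline) ** years
  print(f"Quality-adjusted, a computer that cost $100 in 1975 would cost "
        f"about ${100 * remaining:.2f} by 1996.")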

Yet aggregate productivity growth remains flat by historic standards. And services, which are the most computer-intensive sectors, show the slowest productivity growth. This apparent inconsistency is at the heart of the computer productivity paradox.

Any attempt to explore this paradox, however, must move beyond the economy-wide data on which it is based. The aggregate data hide many illuminating details. For most companies, computers are a production input they invest in, just like new assembly lines, buildings, or employee training. Not all companies use computers the same way, however. Nor can all companies benefit from computer investment. These important differences are lost in the economy-wide data. Furthermore, computers are also an output from a particular manufacturing sector.

To explore these differences, the U.S. economy was divided into 34 private sectors and ranked according to their use of computers. Eight of these sectors use computers intensively (more than 4 percent of their capital was in the form of computers in 1991) and were labeled computer-using sectors. As shown in Table 1, these eight sectors accounted for 63 percent of total value added and 88 percent of all computer capital input in 1991.
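
The classification rule itself is simple, as the hypothetical sketch below illustrates; the sector names and computer-capital shares in it are invented for illustration, and the study’s actual shares appear in Table 1.

  # Hypothetical illustration of the classification described above: a sector
  # counts as "computer-using" if computers made up more than 4 percent of
  # its capital input in 1991. The shares below are invented, not the
  # study's actual figures.

  capital_share_in_computers = {   # percent of 1991 capital input
      "sector A": 7.5,
      "sector B": 5.1,
      "sector C": 2.4,
      "sector D": 0.8,
  }

  THRESHOLD = 4.0
  computer_using = [name for name, share in capital_share_in_computers.items()
                    if share > THRESHOLD]
  print("computer-using sectors:", computer_using)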

Computers are highly concentrated within three service sectors that together account for more than 75 percent of all computer inputs: trade; finance, insurance, and real estate (FIRE); and “other services,” which includes business and personal services such as software, health care, and legal services. In manufacturing, only five of 21 sectors used computers intensively enough to be labeled computer-using; they accounted for less than 40 percent of total manufacturing output in 1991.

Computers are not everywhere

This wide variation in computer use is evident in recent surveys of adoption of computer-based technologies. For example, in 1993 a staggering 25 percent of all manufacturing plants surveyed by the U.S. Census Bureau used none of 17 advanced technologies. Moreover, patterns of adoption varied greatly by industry and technology. In fact, very few of the surveyed technologies showed use rates greater than 50 percent, and many (particularly lasers, robots, and automated material sensors, all of which depend on computers) were used by fewer than 10 percent of surveyed plants. The most prevalent technologies are computer-aided design and numerically controlled machine systems. Virtually identical surveys in Canada and Australia confirm the diversity reported by U.S. manufacturers.

OECD surveys also show that computers are highly concentrated in specific sectors. In Canada, France, Japan, and the United Kingdom, for example, information and communication equipment is steadily increasing its share of total investment and is much more highly concentrated in the service sectors. In 1993, OECD estimates indicate that the service sector contained nearly 50 percent of all embodied information technology for the seven major industrial nations and that this capital was concentrated primarily in finance, insurance, services, and trade.

More specific data from France and Germany suggest that computers are becoming universal in some industries. Nearly 90 percent of all workers in the French bank and insurance industry used a personal computer or computer terminal in 1993. This proportion is up from 69 percent in 1987 and substantially exceeds the 30 to 40 percent in French manufacturing industries. In Germany, nearly 90 percent of surveyed companies in the service sector report that computers are important in their innovation activities.

Although computers may appear to be everywhere, they are actually highly concentrated in the service sectors and in only a few manufacturing sectors.

When the price of an input falls, companies respond by substituting the cheaper input for more expensive ones. With the enormous price declines in computers, one would expect to see companies substitute less expensive computers for relatively expensive labor and other inputs. For example, companies might replace labor-intensive written records with computer-intensive electronic records. Detailed analysis of the U.S. sectoral data suggests that this is exactly what happened. The eight computer-using sectors invested in computers rapidly and substituted them for other inputs. From 1973 to 1991, these eight sectors reported annual growth in real computer input in excess of 17 percent, with seven out of eight above 20 percent (see Table 1).
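
The mechanism is the textbook cost-minimization condition, sketched here in generic notation that is not the study’s own: a cost-minimizing firm equates the marginal product per dollar across its inputs,

\frac{MP_{\text{computers}}}{p_{\text{computers}}} \;=\; \frac{MP_{\text{labor}}}{w}.

When the computer price falls, the left-hand side rises, and the firm restores the equality by using more computer capital relative to labor and other inputs.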

When compared with the growth rates of labor and output in these sectors, the swift accumulation of computers appears even more striking. In contrast to the phenomenal growth rates of computer capital, labor hours declined in three sectors and experienced growth rates above 3 percent in only two. Similarly, output growth ranged from -0.4 percent to 4.7 percent per year. Moreover, substituting computers for other inputs is not limited to these computer-intensive sectors; the phenomenon is observed in virtually every sector of the U.S. economy.

Several independent company-level studies from the United Kingdom, Japan, and France also suggest that an important part of the computer revolution is substitution of inputs. The French study, for example, found a strong positive relationship between the proportion of computer users and output per hour. A survey of Japanese manufacturing and distribution companies found that information networks complement white-collar jobs but substitute for blue-collar jobs.

Rather than looking at empirical relationships between computers, productivity, and employment patterns, the Australian Bureau of Statistics used a more subjective, although still informative, approach. In a 1991 survey of manufacturing companies, nearly 50 percent rated lower labor costs as a “very important” reason for introducing new technology. A 1994 follow-up study found that almost 25 percent of the companies cited reducing labor costs as a “very significant” or “crucial” objective in technological innovation. These survey results offer still more evidence that companies expect high-tech capital to substitute for other production inputs.

Measuring productivity

These results suggest that a large part of the computer revolution entails substitution of one production input (computers) for others (labor and other types of capital). But is this just wheel-spinning? The answer depends on how productivity is defined and measured and what one means by wheel-spinning. Economists use two distinct concepts of productivity: average labor productivity (ALP) and total factor productivity (TFP). Although these concepts are related, they cannot be used interchangeably, and TFP is the productivity measure most favored by economists when analyzing the production process.

ALP is defined simply as output per hour worked. A major advantage of this measure is its computational simplicity: both output and labor input statistics are relatively easy to obtain. Since the 1930s, however, economists have recognized that labor is only one of many production inputs and that labor’s access to other inputs, especially physical capital, is a key determinant of ALP. That is, when their labor is augmented by more machines and better equipment, workers can produce more. This increase in output need not reflect harder work or improved efficiency but is simply due to increases in the complementary inputs available to the labor force.

This key insight led to the concept of TFP, defined as output per unit of total inputs. Rather than calculating output per unit of labor as in ALP, TFP compares output to a composite index of all inputs (labor, physical capital, land, energy, intermediate materials, and purchased services, augmented with quality improvements), where different inputs are weighted by their relative cost shares.
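
Stated in growth rates, and using generic notation that is mine rather than the article’s (Y for output, H for hours worked, X_i for each input, and s_i for its share of total cost), the two measures can be sketched as

\Delta \ln \mathrm{ALP} \;=\; \Delta \ln Y \;-\; \Delta \ln H
\qquad\text{and}\qquad
\Delta \ln \mathrm{TFP} \;=\; \Delta \ln Y \;-\; \sum_i s_i\, \Delta \ln X_i, \quad \text{with } \sum_i s_i = 1.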

Increased TFP has often been interpreted as technological progress, but it more accurately reflects all factors that generate additional output from the same inputs. New technology is a key source of TFP growth, but so are economies of scale, managerial skill, and changes in the organization of production. Furthermore, technological progress can be embodied, at least in part, in new investment.

ALP and TFP are fundamentally different concepts, although TFP is an important determinant of ALP. ALP grows (that is, each worker can produce more) if workers have more or better machinery to work with (capital deepening), if workers become more skilled (labor quality), or if the entire production process improves (TFP growth).
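
In the same illustrative notation, and assuming for simplicity just two inputs, capital K and quality-adjusted labor qH, with cost shares s_K and s_L that sum to one, the decomposition reads

\Delta \ln\!\left(\frac{Y}{H}\right) \;=\; s_K\, \Delta \ln\!\left(\frac{K}{H}\right) \;+\; s_L\, \Delta \ln q \;+\; \Delta \ln \mathrm{TFP},

where the first term is capital deepening, the second is labor quality, and the third is TFP growth.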

Despite the connection between these two concepts, the trend toward greater use of computers implies different things for each measure of productivity. If investment in computer capital is primarily for input substitution, then ALP should increase as labor is supported by more capital. TFP, however, will not be affected directly; it will increase only if computers increase output more than through their direct impact as a capital input. It is this more-than-proportional increase that many analysts have in mind when they argue that increased investment in computers should result in higher productivity.

It is easier to define these productivity statistics than to measure and apply them. There is a growing consensus among economists that both output growth and productivity growth are poorly measured, especially in the fast-growing service sectors with a high concentration of computers. This measurement problem is part of a more fundamental issue concerning output growth and quality change. Most economists agree that quality improvements are an important form of output growth that need to be measured. The U.S. Bureau of Economic Analysis (BEA) officially measures the enormous quality change in computer equipment as output growth. Based on joint work with IBM, BEA now uses sophisticated statistical techniques to create “constant-quality price indexes” that track the price of relevant characteristics (such as processor speed and memory). These price indexes allow BEA to measure the production of real computing power and count that as output growth. Thus, the quality-adjusted price of computer equipment has fallen at extraordinary rates, while real computer investment has rapidly grown as a share of total investment in business equipment.
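
The hedonic approach behind such constant-quality indexes can be sketched in stylized form; the functional form and the two characteristics shown here are illustrative rather than BEA’s actual specification. The log price of computer model i in period t is regressed on its measured characteristics and a set of time effects,

\ln p_{it} \;=\; \alpha_t \;+\; \beta_1 \ln(\text{speed}_{it}) \;+\; \beta_2 \ln(\text{memory}_{it}) \;+\; \varepsilon_{it},

and the estimated time effects \alpha_t trace the price of a computer of constant quality. Deflating nominal spending by that index converts quality improvement into measured growth in real computing power.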

For other sectors of the economy, however, output is harder to define and measure. In the FIRE sector, for example, BEA extrapolates official output growth for banks based on employment growth so that labor productivity is constant by definition. Yet most would argue that innovations such as ATMs and online banking have increased the quality of bank services. Because difficulties of this type are concentrated in the service sectors, output and productivity estimates in those sectors must be interpreted with caution.

Labor productivity growth

In the early 1970s, the industrial world experienced a major growth slowdown in terms of aggregate output, ALP, and TFP. Economists have offered many possible reasons for this slowdown–the breakdown of the Bretton Woods currency arrangements, the energy crisis, an increase in regulation, a return to normalcy after the unique period of the 1950s and 1960s, and an increase in the share of unmeasured output–but a clear consensus has not yet emerged. Because the computer revolution began in the midst of this global slowdown, untangling the relationship between computers and productivity growth is particularly difficult. For example, does the drop in U.S. ALP growth from 3.4 percent (1948 to 1973) to 1.2 percent (1973 to 1996) mean that computers lowered ALP growth? Or would the slowdown have been much worse had the computer revolution never taken place? Without the proper counterfactual comparison–what productivity growth would have been without computers–it is difficult to identify the true impact of computers.

Our approach to that problem is to compare ALP growth in the computer-using sectors with that in the non-computer-using sectors in the 34-sector database before and after the slowdown period. Chart 1 compares growth rates of average labor productivity for five computer-using sectors in manufacturing and 16 other manufacturing sectors for 1960 to 1996. For the early period of 1960 to 1973, labor productivity growth was roughly the same for the two groups: 2.8 percent per year for computer-using sectors and 3.1 percent per year for non-computer-using sectors. Both groups then suffered during the much-publicized productivity slowdown in the 1970s as ALP growth rates fell to about 1.5 percent per year during the period 1973 to 1979.

As the computer continued to evolve and proliferate in the 1980s, businesses adapted and their production processes changed. Personal computers, first classified as a separate investment good in 1982, became the dominant form of computer investment, and ALP growth accelerated in the computer-using sectors in manufacturing. Between 1990 and 1996, these sectors posted strong ALP growth of 5.7 percent per year, whereas other manufacturing sectors managed only 2.6 percent per year. Because ALP growth for computer-using sectors prior to the 1970s was lower than in other manufacturing sectors, this analysis strongly suggests that computers are having an important impact on labor productivity growth in U.S. manufacturing.

The same comparison for nonmanufacturing sectors yields quite different results, with no obvious ALP gains for computer-using sectors outside of manufacturing (Chart 2). Rather, the three computer-using sectors and the 10 non-computer-using sectors show healthy productivity growth prior to 1973 but sluggish productivity growth thereafter: Labor productivity grew only 0.9 percent per year for computer-using sectors in nonmanufacturing between 1990 and 1996, and 0.8 percent for other nonmanufacturing sectors.

The sharp contrast in productivity growth in computer-using sectors in manufacturing and in services highlights the difficulties associated with productivity measurement. Economists have long argued that output and productivity growth are understated in the service sectors due to the intangible nature of services, unmeasured quality change, and poor data. These results support that conjecture and further imply that measurement problems are becoming more severe in the computer-intensive service sectors. This suggests that much of what computers do in the service sectors is not being captured in the official productivity numbers.

Although measurement errors probably understate output and productivity growth in the computer-intensive service sectors, this does not change the finding of significant input substitution. In the trade and FIRE sectors, for example, the growth of labor slowed while computer inputs increased more than 20 percent per year from 1973 to 1991. Because capital and labor inputs are measured independently of service-sector output, this type of primary input substitution is not subject to the same downward bias as is TFP growth. Whatever the true rates of output and TFP growth, these service sectors are clearly substituting cheap computers for more expensive inputs.

Variation in growth

Estimates of TFP growth for each of the 34 sectors reveal no relationship between TFP growth and the growth of computer use. TFP grew in some sectors, fell in others, and stayed about the same in the rest, but there was no obvious pattern relating TFP growth to computer use. Nor was there any relationship evident for just the eight computer-using sectors (see Table 1). These findings suggest that, in contrast to increases in ALP, there have been few TFP gains from the widespread adoption of computers.

Many consider this disappointing. Learning lags, adjustment costs, and measurement error have been suggested as reasons for a slow impact of computers on TFP growth. It is important to remember, however, that this finding is entirely consistent with the evidence on input substitution. If computer users are simply substituting one production input for another, then this reflects capital deepening, not TFP growth. Recall that TFP grows only if workers produce more output from the same inputs. If investment in new computers allows the production of entirely new types of output (for example, complex derivatives in the financial services industry), the new products are directly attributable to the new computer inputs, not to TFP growth.

This conclusion partly reflects BEA’s explicit adjustment for the improved quality of computers and other inputs, but most economists agree that quality change is an important component of capital accumulation. That is, when computer investment is deflated with BEA’s official constant-quality price deflator, the enormous improvement in the performance of computers is then folded into the estimates of computer capital. Quality improvements are effectively measured as more capital, so capital becomes a more important source of growth, and the TFP residual accounts for a smaller proportion of output growth.

So far this analysis has focused on the role of computers as an input to the production process. But computers are also an output; companies produce computers and sell them as investment and intermediate goods to other sectors and as consumption and export goods. Because the observed input substitution in computer-using sectors is driven by rapid price declines for computer equipment, it is important to examine the production of computers themselves and investigate the source of that price decline.

The data show that TFP is the primary source of growth for the computer-producing sector and a major contributor to the modest TFP revival in the U.S. economy, particularly in manufacturing. From 1979 to 1991, virtually the entire growth in output in the computer-producing sector is attributable to TFP growth; that is, output grew much faster than inputs and caused a large TFP residual. In fact, output grew 2.3 percent per year even though labor, energy, and material inputs actually declined. The computer-producing sector is itself also an important user of computers; nearly 40 percent of the growth in output attributable to capital services comes from computer capital over this same period.

Rapid growth in TFP in the computer-producing sector contrasts with sluggish TFP growth in the entire U.S. private-business economy, which fell from more than 1.6 percent per year before 1973 to -0.3 percent for the period from 1973 to 1979. Even the computer-producing sector showed negative TFP growth in that period. After 1979, however, the story is very different. While annual TFP growth for the 35 sectors rebounded mildly to 0.3 percent per year, TFP growth in the computer-producing sector jumped to 2.2 percent for 1979 to 1991.

The aggregate economy consists, by definition, of its sector components. How much of economy-wide TFP growth reflects TFP growth from the computer-producing sector? In the 1980s, it was as much as one-third of total TFP growth. In the 1990s, TFP growth in the sector remained high, but because there were increases in TFP growth in other manufacturing sectors, it accounted for less of the total, about 20 percent between 1991 and 1994.

Recent estimates of TFP growth for manufacturing industries confirm these trends. Of the 20 manufacturing sectors analyzed by the Bureau of Labor Statistics (BLS), “industrial and commercial machinery,” where computers are produced, showed the most rapid annual TFP growth: 3.4 percent per year from 1990 to 1993. Total manufacturing, on the other hand, showed TFP growth of just 1.2 percent for the same period. Although these estimates are not directly comparable to those derived from the 35-sector database, they confirm the importance of the computer-producing sector in economy-wide TFP growth.

Given the substantial work by BEA on computer prices, real output growth in the computer-producing sector is probably among the best measured. Thus, the estimates of rapid output and TFP growth in the computer-producing sector appear sound. Furthermore, these results support the conventional wisdom that computers are more powerful, affordable, and widespread than ever. Recent work at BEA, however, suggests that constant-quality price indexes should also be used for other production inputs. If the quality of these other inputs such as semiconductors is improving rapidly but costing less, TFP growth will be overstated in the sectors that use these inputs and understated in the sector that produces them. This kind of mismeasurement primarily affects the allocation of TFP among sectors, not the economy-wide total TFP.

The substitution of computers for other, more expensive, inputs goes a long way toward explaining the computer paradox. The impact of computers is observable not in TFP, as many observers perhaps expected, but in the accumulated stock of computer capital. This explains why, despite the pickup in labor productivity growth after 1979, economy-wide TFP growth has remained low. For most sectors, computers are a measured input that contributes directly to economic growth. Rapid TFP growth occurs primarily in the computer-producing sector, where faster, better computers are continually offered at ever-lower prices. This reflects fundamental technological advances that are driving the computer revolution and makes a substantial contribution to economy-wide TFP growth.

Moreover, there is little indication that this growth will slow. BLS, for example, projects that labor productivity growth in the computer and office equipment industry will accelerate to 9.9 percent per year through 2005. If these projections are correct and companies continue to substitute relatively inexpensive computers for costlier older models, computers will become an increasingly important source of economic growth.

Saving Medicare

For the past generation, ensuring access to health care and financial security for older Americans and their families under the Medicare program has been an important social commitment. The elderly are healthier now than before Medicare was enacted, and they and their families are protected from financial ruin caused by high medical expenses. Medicare has also helped to narrow the gap in health status between the most and least well-off, and it has contributed to medical advances by supporting research and innovation that have led to increased health, vitality, and longevity among the elderly.

Now this remarkable social commitment to older persons may be weakening, largely because of Medicare’s faltering finances. As everyone knows, Medicare’s costs are rising rapidly, and members of the huge baby boom generation will soon begin to retire. Although the boomers are expected to be healthier in their older years than their parents and grandparents were, the surge in the number of people over 65 will probably mean much more chronic illness and disability in the population as a whole and will raise the demand for health services and long-term care. Thus, ensuring the continued commitment of health protection to the elderly will require bolstering the Medicare program.

Medicare now pays only about 45 percent of the total health care costs of older Americans.

Medicare spending topped $200 billion in 1996. Despite an overall slackening of medical price inflation over the past few years, Medicare spending has continued to outpace private health insurance spending. The possibility that Medicare outlays will continue to grow faster than overall federal spending is a major concern. Medicare’s share of the federal budget is expected to increase from under 12 percent in 1997 to 16 percent by 2008.

The solvency of the Medicare Part A trust fund depends on a simple relationship: income into the fund must exceed outlays. In 1995, for the first time since Medicare began, outlays exceeded income. In 1997, Part A expenditures were $139.5 billion, whereas income was $130.2 billion. The difference was made up from a reserve fund, which had $115.6 billion left in it at the end of 1997. By 2008, the reserves are expected to be depleted.
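
The arithmetic of the 1997 shortfall, using the figures just cited, is straightforward:

\$139.5\ \text{billion (outlays)} \;-\; \$130.2\ \text{billion (income)} \;=\; \$9.3\ \text{billion drawn from reserves}.

Annual draws of this size, which are projected to grow, are what is expected to exhaust the remaining $115.6 billion by 2008.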

Because the first baby boomers become eligible for Medicare in 2011, action is needed now to solve the program’s long-term financial problems. The combination of increased federal outlays for millions of aging Americans and projected smaller worker-to-retiree ratios will most assuredly lead to the Medicare program’s inability to meet the health care needs of next century’s elderly population. This scenario is unavoidable without substantial reductions in the growth of Medicare spending, increased taxes to cover program costs, or both.

A 17-member national Bipartisan Commission on the Future of Medicare is now considering changes needed to shore up the program. The commission’s mandate is to review Medicare’s long-term financial condition, identify problems that threaten the trust fund’s financial integrity, analyze potential solutions, and recommend program changes. A final report is due to Congress by March 1, 1999. Two approaches the commission is considering are raising the Medicare eligibility age from age 65 to 67 and requiring beneficiaries to pay more of the program’s costs. Congress has already demonstrated its willingness to restructure the program. In 1997, with little public input or comment, the Senate voted to gradually raise the Medicare eligibility age to 67 years, an action that would have ended the entitlement to Medicare for persons age 65 and 66. Fortunately, the House refused to support this measure. But the Senate vote underscores how great the stakes are in the current debate.

Fixing Medicare’s fiscal problems will require difficult choices. Whatever changes are made will ultimately affect every American directly or indirectly. To assess the proposals now on the table, it is essential that people of all ages have a basic understanding of the Medicare program: how it is financed, what services it pays for, and its current limitations.

Medicare basics

The Medicare program, enacted in 1965, provides health insurance for persons 65 and older if they or their spouses are eligible for Social Security or Railroad Retirement benefits. Also eligible are disabled workers who have received Social Security payments for 24 months; persons with end-stage renal disease; and dependent children of contributors who have become disabled, retired, or died. When Medicare was enacted, only 44 percent of older Americans had any hospital insurance. Today, Medicare provides coverage to 98 percent of those 65 and older (more than 33 million people) as well as 5 million disabled persons.

Although Medicare is referred to as a single program, it has two distinct parts, one for hospital insurance and one for physician services, with separate sources of financing. Enrollment in Part A, the Medicare Hospital Insurance Trust Fund, is mandatory for those eligible and requires no premium payments from eligible persons. Persons not eligible can purchase Part A coverage; the cost of the premium depends on how much covered employment a person has. Persons with no covered employment must pay the full actuarial cost, which was $3,700 per year in 1997. Enrollment in the Supplementary Medical Insurance Trust Fund (Part B) is voluntary and is limited to those who are entitled to Part A. Nearly all persons over 65 who are eligible for Part A also purchase Part B coverage. The current premium is $43.80 per month. (Those who choose not to enroll often have generous coverage as a retirement benefit.) Nearly 80 percent of Medicare beneficiaries also have supplemental insurance, either “Medigap” or a retirement plan.

About 90 percent of Part A income comes from a 2.9 percent payroll tax, half paid by employees and half by employers. Self-employed individuals pay the full 2.9 percent. The rest comes from interest earnings, income from taxation of some Social Security benefits, and premiums from voluntary enrollees. Part B of Medicare is financed primarily through beneficiary premiums (about 25 percent) and general revenues (about 75 percent). Part B does not face the prospect of insolvency because general revenues always pay any program expenditures not covered by premiums.

Should beneficiaries pay more?

One proposal to reduce federal outlays for Medicare would require beneficiaries to pay a larger share of the program’s costs. But the proposal fails to consider a compelling fact: Medicare currently pays only about 45 percent of the total health care costs of older Americans. Contrary to what many Americans assume, Medicare benefits are less generous than typical employer-provided health policies. Although Medicare covers most acute health care services, it does not cover significant items such as many diagnostic tests, eye examinations, eyeglasses, and hearing aids. Most important, it does not cover the cost of prescription drugs.

Many chronic conditions are now successfully controlled with medications. In 1998, Medicare beneficiaries with prescription drug expenses will spend an average of $500 per person on medications. Beneficiaries who need to take multiple drugs on a daily basis can easily pay twice that amount. Of the 10 standard Medigap policies, only the three most expensive ones include prescription drug coverage, and some of these are not sold to people with preexisting health conditions. In short, most seniors are simply out of luck when it comes to insurance protection for prescription drug expenses.

Another gap in protection is for long-term care, which includes health care, personal care (such as assistance with bathing), and social and other supportive services needed during a prolonged period by people who cannot care for themselves because of a chronic disease or condition. Long-term care services may be provided in homes, community settings, nursing homes, and other institutions. Medicare’s coverage of nursing home care and home health care is generally limited to care after an acute episode, and Medigap policies don’t cover long-term care. Private long-term care insurance is available, but very few people over 65 purchase it because of high premium costs. The average cost of a moderately priced policy at age 65 is about $1,800 per year; at age 79, about $4,500.

The federal government is significantly overpaying managed care companies for the care of Medicare beneficiaries.

Part A and B services require considerable cost sharing in the form of deductibles and copayments, and unlike most private health insurance plans, Medicare does not have a catastrophic coverage cap that limits annual financial liability. Since the 1980s, several legislative changes have increased the amount of program costs for which Medicare beneficiaries are responsible through premium increases, higher deductibles, and increased copayments. In 1997, Medicare beneficiaries spent on average about $2,149, or nearly 20 percent of their income, on out-of-pocket costs for acute health care. These costs include premiums for Part B and Medigap insurance, physician copayments, prescription drugs, dental services, and other uncovered expenses. These estimates do not include payments for home health care services or skilled nursing facility care not covered by Medicare. In 1995, Medicare beneficiaries who used skilled nursing facilities (less than 3 percent of all beneficiaries) had an average length of stay of about 40 days, which would have required a copayment of about $1,900. Some Medigap policies will pay this. Compared to persons under 65 who have insurance, Medicare beneficiaries pay a significantly larger share of their health care costs through out-of-pocket payments.
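
The skilled nursing figure follows from the structure of the benefit; the daily rate used here is approximate. Medicare pays the first 20 days of a covered skilled nursing stay in full, and a daily coinsurance of roughly $95 (in 1997) applies to days 21 through 100, so for a 40-day stay:

(40 - 20)\ \text{days} \;\times\; \$95\ \text{per day} \;\approx\; \$1{,}900.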

Medicare’s home health benefit is a particular focus of proposals to increase cost sharing. The home health benefit covers intermittent skilled nursing visits and part-time intermittent home health aide visits to assist people with tasks such as bathing and dressing. Home health is the only Medicare benefit that does not require cost sharing. A congressional proposal last year would have required a $5 copayment per visit for many Medicare beneficiaries who receive home health care. Although this may appear to be a trivial amount, a closer look at the recipients of home care and the amount of services they receive indicates that the copayment could indeed prove to be unaffordable for a large proportion of home health beneficiaries. About a third of these beneficiaries have long-term needs and are more impaired and poorer than both other home health users and the average Medicare beneficiary. This group of long-term home health users is also more likely to have incomes under $15,000 a year. Because they receive on average more than 80 home health visits a year, a $5 copayment would increase their costs by more than $400 a year. There are serious equity concerns about requiring the poorest and most infirm Medicare beneficiaries to pay such an amount.

Individually purchased Medigap insurance is often medically underwritten and is expensive. Annual premiums can range from $420 to more than $4,800, depending on the benefits offered and the age of the beneficiary. Annual premiums for a typical Medigap plan reached $1,300 in 1997. Despite their high cost, however, many standard Medigap policies offer little protection. Only 14 percent of beneficiaries have policies that cover prescription drugs. Rising Medigap premium costs, fueled in part by the growth of hospital outpatient services that require a 20 percent copayment, have become an issue of growing concern to many older Americans.

Employer-provided retiree health coverage is another important source of financial security for some older Americans. Retirees, especially those who have the least income or the poorest health, value this extra security as well as the comprehensiveness of the insurance as compared to Medigap plans. However, the number of firms that offer health benefits to Medicare-eligible retirees is declining. Fewer than one-third of all large firms now award retirement health benefits. Firms are also limiting their financial obligations to retirees by restricting health benefit options, tightening eligibility requirements (such as required length of employment), and increasing individual cost sharing. And more companies are replacing insurance with defined contribution plans in which retirees get a fixed dollar amount to purchase health benefits. Thus, the value of the employer contribution can diminish over time.

The relationship between Medicare and the supplemental insurance market is complex and will continue to change as the market changes in many areas of the country and as the nature of employer-sponsored retiree coverage evolves. These trends and their interrelationships must be thoroughly assessed when considering possible structural alterations in Medicare. It cannot be assumed that if Medicare raises deductibles and copayments, Medigap policies will cover the increased cost sharing, or if they do, that they will remain affordable.

Medicare beneficiaries, whatever their income, pay the same monthly Part B premiums. Some people believe that requiring people with higher income to pay a larger amount would be more equitable. In 1997, Congress considered but did not enact a proposal that would have made the Part B premium income-related: Medicare beneficiaries with higher incomes would have been required to pay more, and rates for those with low incomes would have remained unchanged. Concerns were raised that any approach to make people with higher incomes pay more would undermine the social insurance basis of the program and encourage higher-income elderly persons to opt out. However, the primary reasons Congress dropped the proposal were that administering it would have been complex and costly and that it would not have produced significant new revenue. This is not surprising given that 50 percent of persons 65 and older have incomes less than $15,000 per year.

Age of eligibility

The Bipartisan Medicare Commission is specifically charged to make recommendations on raising the age-based eligibility for Medicare. Proponents argue that because the eligibility age for Social Security will gradually rise to age 67 in 2025, Medicare’s eligibility age should rise as well. This reasoning, however, is based on a misunderstanding of how Social Security works. It isn’t the eligibility age for Social Security that is being raised but rather the age at which a person is eligible to receive full benefits from the program. All persons eligible for Social Security will still have the option of retiring at age 62 with reduced benefits, and many do.

If Medicare eligibility were to truly parallel Social Security eligibility, then Medicare would offer reduced benefits to early retirees who would be required to pay a greater portion of the actuarial value of the Part B premium than current beneficiaries pay. Such a policy was recently proposed by the Clinton administration as a practical way of providing health insurance to people aged 55 to 64 who need to take early retirement. The proposal was criticized by those who questioned its budget neutrality and by those concerned about the affordability of premiums for low-income Medicare beneficiaries, particularly as they reach advanced ages.

Although reducing the number of people who are eligible for Medicare would appear to be a major source of program savings, closer analysis indicates that it is not. A recent study found that even if the eligibility age were raised immediately, no more than a year would be added to the life of the Part A trust fund, because a significant number of people aged 65 and 66 would still qualify for Medicare on the basis of disability. Also, per capita Medicare expenditures for persons 65 and 66 years old are less than two-thirds of the cost for the average beneficiary. It is estimated that raising the eligibility age to 67 would reduce total annual program costs by only 6.2 percent. For this small saving, the number of uninsured 65- and 66-year-olds could reach 1.75 million.

We have to acknowledge the possibility that society will choose to pay more (in short, higher taxes) for Medicare’s continued protection.

By any measure, the incidence of health problems increases with advanced age. The earlier the age of retirement, the more frequently poor health or disability is cited as the primary reason for retiring. Health insurance and the access it provides to medical care become more important as people grow older because of the increasing risk of having major and multiple problems. In the current medically underwritten health insurance market, people who are older and who do not have employer-provided insurance are not likely to be covered. Either they are considered medically uninsurable because of preexisting conditions, or they are charged so much because of a preexisting condition that they can’t afford the policy. On the basis of data from the National Medical Expenditure Survey, a rough estimate of the cost of a private, individual insurance policy covering 80 percent of expenses for a 65-year-old person is about $6,000 per year.

It is tempting to assume that if the age for Medicare eligibility were increased, employers who provide health benefits to retirees would simply extend their coverage to fill the gap. In reality, the opposite is likely to happen. Economists at Rand found that between 1987 and 1992, the percentage of employers offering retiree health benefits to persons under 65 decreased from 64 percent to 52 percent. This is not surprising, because retiree coverage of those 65 and older is supplemental to Medicare and therefore costs much less than the coverage provided to those under 65. A recent study found that the average annual cost to employers offering coverage was $4,224 per early retiree under 65; for retirees 65 and older, it was $1,663. For at least a decade, U.S. employers have been cutting back on health insurance coverage for workers and retirees, and this trend is expected to continue indefinitely.

Recent court rulings have further contributed to the insecurity of retiree health benefits. In separate 1994 rulings, two federal appeals courts decided that under the Employment Retirement Income Security Act (ERISA), employers can modify or terminate welfare benefit plans, notwithstanding promises of lifetime benefits that were given to employees. In effect, the courts ruled that “informal” documents distributed to employees promising lifetime benefits are basically irrelevant if the actual contract specifies different provisions. In 1995, the U.S. Supreme Court refused to hear an appeal of an Eighth Circuit Court decision that allowed a company to modify its retirees’ health benefits. Also in 1995, the Supreme Court found that employers are generally free under ERISA, for any reason and at any time, to adopt, modify, or terminate welfare and health benefit plans.

Taken together, the high cost of health insurance, the decreasing number of companies offering retiree health benefits, and the unfavorable court rulings make it extremely unlikely that employers are going to step in and provide health care coverage for people aged 65 and 66. Thus, the Medicare Commission should not consider an increase in Medicare’s eligibility age without fully recognizing the limitations of the private health insurance market and the certain increase in the number of uninsured persons. Any reasonable proposal to increase the Medicare eligibility age must include provisions for people aged 65 and 66 to buy into the Medicare program. But this is not a straightforward solution, because it raises questions about the affordability of the actuarially based Medicare premium (about $3,000 in 1997) and the resulting need for subsidies for low-income persons.

Is managed care the savior?

One of the most significant recent developments within the Medicare program is the introduction of managed care. More than 5 million Medicare beneficiaries are enrolled in managed care plans, and the Congressional Budget Office has projected that within a decade, nearly 40 percent of beneficiaries will be enrolled in managed care. Some think that managed care could be the salvation of the Medicare program, because they believe that it can simultaneously reduce overall Medicare expenditure growth and provide better benefits and lower cost sharing for Medicare enrollees. There are valid reasons for thinking that this scenario is in fact too good to be true.

Managed care plans do tend to provide more generous benefits and require less cost sharing than the traditional Medicare program. By accepting limits on their choice of doctors and hospitals, Medicare beneficiaries may secure valued benefits such as prescription drugs, often without additional charges. Managed care plans receive a monthly payment per enrollee from the federal government that is about 95 percent of the average cost of treating Medicare patients in the fee-for-service sector. But because plans tend to attract healthier individuals, whose cost is less than this per capita rate, Medicare pays more than it otherwise would have for these beneficiaries under fee-for-service. One study by the Physician Payment Review Commission found that the cost of treating new Medicare managed care enrollees was only 65 percent of the cost of treating beneficiaries under the fee-for-service system. This overpayment problem is exacerbated by the fact that people can switch from managed care plans to fee-for-service once they become seriously ill, and managed care plans clearly have a strong financial incentive to adopt practices that will encourage them to do so.
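
A stylized way to see the overpayment, combining the two figures above (an illustration, not an official estimate): let C be the average per capita cost of treating Medicare beneficiaries under fee-for-service. A plan then receives about 0.95 C per enrollee, while a typical new enrollee, being healthier than average, would have cost only about 0.65 C under fee-for-service, so

0.95\,C \;-\; 0.65\,C \;=\; 0.30\,C,

an overpayment on the order of 30 percent of average cost for such enrollees.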

How to deal with the distortion of financial incentives caused by people with high medical expenses remains a difficult problem. People with major chronic illnesses who need more extensive medical services than the average older person are not considered attractive enrollees because they pose a major financial liability for health plans. If statistical adjustments in Medicare plan payments based on the health status and diagnosis of enrollees could be developed, health plans would be less likely to avoid enrolling this population and less likely to skimp on their care. However, the development of predictive risk models powerful enough to offset risk selection practices by plans is still many years away.

Thus, achieving Medicare savings from an increase in managed care enrollments is not ensured. Health maintenance organizations (HMOs) may choose to enter the Medicare market if they believe that the per capita payments, relative to costs, will yield financial rewards. However, there may prove to be a fine line between setting federal payments high enough to attract HMOs into the Medicare program and setting plan payments at a level that yields any real program savings. Federal budget officials will need to keep their fingers crossed and hope that Medicare savings from managed care will grow over time as plans gain experience in controlling utilization and reducing lengths of hospital stays among the elderly.

One final proposal to revamp Medicare must be examined. For nearly 20 years, one school of economists has argued that we need to fundamentally alter the way we think about Medicare benefits. Their view is that instead of an entitlement to benefits, a Medicare beneficiary should receive a voucher to be used to purchase private health insurance. Supporters argue that this approach would offer beneficiaries the ability to select a health plan that best meets their health and financial needs, much as federal employees do through the Federal Employees Health Benefits Program. It would also give Congress better control of Medicare spending. But theoretical advantages are tempered by practical questions: how to prevent insurance companies from engaging in risk selection; how to prevent diminished quality of care, particularly in lower-cost plans; and most important, how to ensure that the amount of the voucher is sufficient to purchase at least the same amount of benefits and financial protection that Medicare currently provides. This last concern is particularly important for beneficiaries who have the least income and the greatest medical needs. When considering this approach, the Medicare Commission will need to carefully balance hypothesized cost savings with other societal goals such as affordability, access, and quality of care.

Will society choose to pay more?

Medicare’s structure and financing must evolve if it is to meet the health care needs of the retired baby boom cohort. The changes that will be required will entail difficult policy choices. Raising Medicare’s eligibility age to 67 will cut some costs, but at what price? Are the savings worth the cost of creating a new group of uninsured older Americans? Imposing additional cost-sharing requirements on Medicare beneficiaries would in theory provide a powerful incentive to limit their use of health services. In reality, the availability of supplemental insurance decreases this incentive, and there is evidence that increased cost sharing can decrease the utilization of medically necessary care. This would be a particularly negative and potentially costly outcome for older persons with chronic conditions who require periodic physician and outpatient care. In general, increasing cost sharing is a regressive approach because it shifts costs to those who are sickest and it imposes a greater burden on the poor.

We must continue to reduce the rate of increase in Medicare’s costs. We also have to acknowledge the possibility that society will choose to pay more (in short, higher taxes) for Medicare’s continued protection. This option is often dismissed as not politically feasible, but when people understand the personal costs of the alternatives, they may change their minds.

Medicare has strong and enduring public support because it is a universal program. Although Medicare must be put on a sound financial basis, its universal nature must not be undermined, and reform must not come at the high price of increased numbers of uninsured, increased financial insecurity, and reduced care for those who need it the most. President Johnson perfectly captured the larger purpose of the Medicare program when he said “with the passage of this Act, the threat of financial doom is lifted from senior citizens and also from the sons and daughters who might otherwise be burdened with the responsibility for their parents’ care. [Medicare] will take its place beside Social Security and together they will form the twin pillars of protection upon which all our people can safely build their lives.” We need to ensure that this protection continues into the 21st century.

No Productivity Boom for Workers

America’s love affair with the new technologies of the Information Age has never been more intense, but nagging questions remain about whether this passion is delivering on its promise to accelerate growth in productivity. Corporate spending on information technology hardware is now running in excess of $220 billion per year, easily the largest line item in business capital spending budgets. And that’s just the tip of the cost iceberg, which has been estimated at three to four times that amount if the figure includes software, support staff, networking, and R&D–to say nothing of the unrelenting requirements of an increasingly short product-replacement cycle.

Many believe there are signs that the long-awaited payback from this technology binge must now be at hand. They look no further than the economic miracle of 1997, a year of surging growth without inflation. How could the U.S. economy have entered the fabled land of this “new paradigm” were it not for a technology-led renaissance in productivity?

The wisdom of corporate America’s enormous bet on information technology has never been tested by a cyclical downturn in the real economy.

The technology-related miracles of 1997 go well beyond the seeming disappearance of inflation. The explosion of the Internet, the related birth of electronic commerce, and the advent of fully networked global business are widely viewed as mere hints of the raw power of America’s emerging technology-led recovery. The most comprehensive statement of this belief was unveiled in a legendary article in Wired by Peter Schwartz and Peter Leyden. It argues that we “are riding the early waves of a 25-year run of a greatly expanding economy.” It’s a tale that promises something for everyone, including the disappearance of poverty and geopolitical tensions. But in the end, it’s all about the miracles of a technology-led resurgence in productivity growth. This futuristic saga has become the manifesto of the digital age.

Against this backdrop, the “technology paradox” (the belief that the paybacks from new information technologies are vastly overblown) seems hopelessly outdated or just plain wrong. Could it be that the hype of the Information Age is actually supported by economic data? Ultimately, the debate boils down to productivity, which is the benchmark of any economy’s ability to create wealth, sustain competitiveness, and generate improved standards of living. Have the new technologies and their associated novel applications now reached a critical mass that is ushering in a new era of improved and sustained productivity growth that benefits the nation as a whole? Or does the boom of the 1990s have more to do with an entirely different force: namely, the tenacious corporate cost-cutting that has benefited a surprisingly small proportion of the actors in the U.S. economy?

A glacial process

For starters, we should remember that shifts in national productivity trends are typically slow to emerge. That shouldn’t be surprising; aggregate productivity growth represents the synergy between labor and capital, bringing into play not only the new technologies that are embedded in a nation’s capital stock but also the skills of workers in using them to boost their productivity.

The paradox begins on the capital stock side of the productivity equation, long viewed as a key driver of any nation’s aggregate productivity potential. Ironically, although surging demands for new information technologies have boosted overall capital spending growth to an 8.5 percent average annual pace over the period from 1993 to 1996 (a four-year surge unmatched since the mid-1960s), there has been no concomitant follow-through in the rate of expansion of the nation’s capital stock. Indeed, the growth of the total stock of business’s capital over the 1990-1996 interval has averaged only 2 percent, the slowest pace of capital accumulation in the post-World War II era and only half the 4 percent average gains recorded in the heyday of the productivity-led recovery in the 1960s.

There is no inherent inconsistency between information technology’s large capital-spending share and small capital-stock share. The disparity reflects a very short product-replacement cycle and the related implication that about 60 percent of annual corporate information technology budgets goes toward replacement of outdated equipment and increasingly frequent product upgrades. In other words, there is little evidence of a resurgence in overall capital accumulation that would normally be associated with an acceleration in productivity growth.
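
The point can be put in terms of the standard perpetual-inventory relationship, sketched here in generic notation rather than the official methodology: net additions to the capital stock equal gross investment minus replacement of worn-out or retired equipment,

K_{t+1} \;=\; K_t \;+\; I_t \;-\; R_t.

If something like 60 percent of information technology spending I_t simply replaces existing gear (part of R_t), even rapid growth in I_t translates into much slower growth in the stock K_t.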

At the same time, the news on the human capital front is hardly encouraging. In particular, there is little evidence that the educational attainment of U.S. workers has moved to a higher level, which should also be a feature of an economy that is moving to higher productivity growth. The nationwide aptitude test results of graduating high school seniors remain well below the levels of the 1960s. Companies may be working smarter, but there are few signs that this result can be traced to the new brilliance of well-educated and increasingly talented workers.

Productivity is all about delivering more output per unit of work time. It is not about putting in more time on the job.

It is important to understand the historical record of shifts in aggregate productivity growth trends. Acceleration was slow to emerge in the 1960s, with the five-year trend moving from 1.75 percent in the early part of the decade to 2.25 percent at its end. Similarly, the great slowdown that began in the late 1970s saw a downshift in productivity growth, from 2 percent to 1 percent, unfold over 5 to 10 years. Even the 1960s changes fell well short of the heroic claims of the New Paradigmers, who steadfastly insist that U.S. productivity growth has gone from 1 percent in the 1980s to 3 to 4 percent in the latter half of the 1990s. Such an explosive acceleration in national productivity growth would outstrip any historical experience (see Figure 1).

But perhaps it is most relevant to examine that slice of activity where the new synergies are presumed to be occurring: the white-collar services sector. According to U.S. Department of Commerce statistics, fully 82 percent of the nation’s total stock of information technology is installed there, in retailers, wholesalers, telecommunications, transportation, financial services, and a wide array of other business and personal service establishments. Not by coincidence, around 85 percent of the U.S. white-collar work force is employed in the same services sector. Thus, the U.S. productivity debate is all about the synergy, or lack thereof, between information technology and white-collar workers.

Where the rubber meets the road

A look at the shifting mix of U.S. white-collar employment provides some preliminary hints about what lies at the heart of the U.S. productivity puzzle. In recent years, employment growth has slowed most sharply in the back-office (that is, processing) categories of information-support workers who make up 29 percent of the service sector’s white-collar work force. In contrast, job creation has remained relatively vigorous in the so-called knowledge-worker categories–the managers, executives, professionals, and sales workers that account for 71 percent of U.S. white-collar employment.

Increasing the productivity of knowledge workers is going to be far more difficult to achieve than previous productivity breakthroughs for blue-collar and farm workers.

The dichotomy between job compression in low value-added support functions and job growth in high value-added knowledge-worker categories is an unmistakable and important byproduct of the Information Age. Capital-labor substitution works at the low end of the value chain, as evidenced by an unrelenting wave of back-office consolidation, but it is not a viable strategy at the high end of the value chain, where labor input tends to be cerebral and much more difficult to replace with a machine. Consequently, barring near-miraculous breakthroughs in artificial intelligence or biogenetic reprogramming of the human brain, productivity breakthroughs in knowledge-based applications should be inherently slow to occur in the labor-intensive white-collar service industry.

Debunking the measurement critique

There are many, of course, who have long maintained that the U.S. productivity puzzle is a statistical illusion. Usually this argument rests on the presumed understatement of service sector output (the numerator in the productivity equation). This understatement reflects Consumer Price Index (CPI) biases that deflate a current-dollar measure of output with what is believed to be an overstated price level. But there’s also a sense that statisticians are simply unable to capture that amorphous construct, the service sector “product.” That may well be the case, although I note that last summer’s multiyear (benchmark) revisions to the Gross Domestic Product (GDP) accounts, widely expected to uncover a chunk of the “missing” output long hinted at by the income side of the national accounts, left average GDP (and productivity) growth essentially unaltered over the past four years.

I worry more about accuracy in measuring the denominator in the productivity equation: hours worked. Existing labor-market surveys do a reasonably good job of measuring the number of employed workers in the United States, but I do not believe the same can be said for the work schedule of the typical employee. I maintain that working time has lengthened significantly over the past decade and could well reduce the accuracy of the labor input number used to derive national productivity.

Ironically, this lengthening of work schedules appears to be closely tied to an increase in work away from the office that is being facilitated by the new portable technologies of the Information Age: laptops, cellular telephones, fax machines, and beepers. Many white-collar workers are now on the job much longer than the official data suggest. Productivity is all about delivering more output per unit of work time. It is not about putting in more (unmeasured) time on the job. If work time is underreported, then productivity will be overstated no matter what problems exist in the output measurement.

According to a recent Harris Poll, the median number of hours worked per week in the United States rose from 40.6 in 1973 to 50.8 in 1997. This stands in sharp contrast to the 35-hour weekly work schedule assumed in the government’s official estimates of productivity. U.S. workers obviously feel they are working considerably longer hours than Washington’s statisticians seem to believe. The government’s companion survey of U.S. households hints at the same conclusion; it estimates the average 1996 work week in the nonfarm economy at close to 40 hours. That’s far short of the 51 hours reported in the Harris Poll but still considerably longer than the work week used in determining official productivity figures.
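
To see how much this could matter, consider a rough, illustrative calculation of my own, using the figures above and bearing only on the level of measured productivity rather than its growth rate:

\[
\frac{\text{measured output per hour}}{\text{true output per hour}} = \frac{\text{true hours}}{\text{reported hours}} \approx \frac{40}{35} \approx 1.14
\]

That is, if employees actually average about 40 hours a week while the official figures assume 35, measured productivity overstates true productivity by roughly 14 percent; at the 51 hours suggested by the Harris Poll, the overstatement approaches 45 percent.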

Analysis suggests that underreporting of work schedules since the late 1970s has been concentrated in the services sector. The discrepancy is particularly large in the finance, insurance, and real estate (FIRE) component. Similar discrepancies are evident in wholesale and retail trade and in a more narrow category that includes a variety of business and professional services. By contrast, recent trends in both establishment- and household-based measures of work schedules in manufacturing, mining, and construction–segments of the U.S. economy that also have the most reliable output figures–tend to conform with each other.

So what does all this mean for aggregate productivity growth? To answer this question, I have performed two sets of calculations. The first is a reestimation of productivity growth under the work-week assumptions of the Labor Department’s household survey. On this basis, productivity gains in the broad services sector (a nonmanufacturing category that also includes mining and construction) averaged just 0.1 percent annually from 1964 through 1996, about 0.2 percentage points below the anemic 0.3 percent trend derived from the establishment survey. In light of the results of the Harris Poll, this is undoubtedly a conservative estimate of the hours-worked distortion in productivity figures. Indeed, presuming that work schedules in services move in tandem with the results implied by the Harris Poll, our calculations suggest that service sector productivity growth is actually lower, by 0.8 percentage points per year, than the government’s official estimates.
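
As a back-of-the-envelope check on the order of magnitude (my own illustrative arithmetic, under the simplifying assumption that the official hours data embody an essentially unchanged work week), note that productivity growth is simply output growth minus the growth of hours worked:

\[
g_{\text{productivity}} = g_{\text{output}} - g_{\text{hours}}
\]

If the work week in fact lengthened from 40.6 to 50.8 hours between 1973 and 1997, as the Harris Poll suggests, that alone adds roughly \( \ln(50.8/40.6)/24 \approx 0.9 \) percentage points a year of hours growth that the official figures miss, and subtracts the same amount from true productivity growth, broadly consistent with the 0.8 percentage point adjustment cited above.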

A final measurement critique of the productivity results also bears mentioning: the belief that statistical pitfalls can be traced to those sectors of the economy (such as services) where the data are the fuzziest. This point of view has been argued by Alan Greenspan and detailed in a supporting paper by the Federal Reserve’s research staff. In brief, this study examines productivity results on a detailed industry basis and concludes that because the figures are generally accurate in the goods-producing segment of the economy, there is reason to be suspicious of results in the service sector, especially in light of well-known CPI biases in this segment of the economy. But it may simply be inappropriate to divide national productivity into its industry-specific components. Distinctions between sectors and industries are increasingly blurred by phenomena such as outsourcing, horizontal integration, and the globalization of multinational corporations.

We have performed some simple calculations that suggest that productivity growth would be lower in manufacturing and higher in services if a portion of the employment growth in the surging temporary staffing industry were correctly allocated to the manufacturing sector rather than completely allocated to the services sector as is presently the case. (We start with the assumption that about 50 percent of the hours worked by the help supply industry provides support for manufacturing activities, which is broadly consistent with anecdotal reports from temporary help companies. That 50 percent can then be subtracted from the services sector, where it currently resides in accordance with establishment-based employment accounting metrics, and added back into existing estimates of hours worked in manufacturing. This knocks about 0.5 percentage points off average productivity growth in manufacturing over the past six years and boosts productivity growth in the much larger service sector by about 0.1 percentage point over this same period.)
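
The mechanics of that adjustment are simple, and the asymmetry of the two effects follows from the relative sizes of the two sectors. As a purely schematic illustration (the symbols here are my own shorthand, not figures from the underlying data): reassigning a block of temporary-help hours \( \Delta H \) from services to manufacturing raises measured manufacturing hours and lowers measured services hours by the same absolute amount, so the level of measured productivity shifts by roughly

\[
\frac{\Delta P_{\text{mfg}}}{P_{\text{mfg}}} \approx -\frac{\Delta H}{H_{\text{mfg}}}, \qquad \frac{\Delta P_{\text{svc}}}{P_{\text{svc}}} \approx +\frac{\Delta H}{H_{\text{svc}}}
\]

Because the services hours base \( H_{\text{svc}} \) is several times larger than the manufacturing base \( H_{\text{mfg}} \), spreading the cumulative adjustment over the six-year period trims manufacturing productivity growth by roughly half a percentage point a year while adding only about a tenth of a point to services.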

All this is another way of saying that there are two sides to the productivity measurement debate. Those focusing on the output side of the story have argued that productivity gains in services may have been consistently understated in the 1990s. Our work suggests that the biases stemming from under-reported work schedules could be more than offsetting, leaving productivity trends even more sluggish than the official data suggest.

A new cost structure

Yet another element of the productivity paradox is the link between America’s open-ended commitment to technology and the flexibility of corporate America’s cost structure, especially in the information-intensive services sector. For most of their long history, U.S. service companies were quintessential variable-cost producers. Their main assets were workers, whose compensation costs could readily be altered by hiring, firing, and a relatively flexible wage-setting mechanism.

Now, courtesy of the Information Age and a heavy investment in computers and other information hardware, service companies have unwittingly transformed themselves from variable- to fixed-cost producers, which denies this vast segment of the U.S. economy the very flexibility it needs to boost productivity in an era of heightened competition. Moreover, the burdens of fixed costs are about to become even weightier thanks to the outsized price tag on the Great Year 2000 Fix–perhaps $600 billion–yet another example of dead weight in the Information Age.

A few numbers illustrate the magnitude of the new technology bet and its impact on business cost structures. Between 1990 and 1997, corporate America spent $1.1 trillion (current dollars) on information technology hardware alone, an 80 percent faster rate of investment than in the first seven years of the 1980s. At the same time, the information technology share of business’s total capital stock (expressed in real terms) has soared from 12.7 percent in 1990 to an estimated 19.1 percent in 1996. The recent surge in this ratio is a good proxy for the expanding fixed technology costs that are now viewed as essential to keeping transaction-intensive and increasingly global service companies in business.

To be sure, a large portion of these outlays is written off quickly. Nevertheless, with tax-based service lives typically clustered in the three-to-five-year range, about $460 billion still remains on the books, a little over 40 percent of cumulative information technology spending since 1990. This is hardly an insignificant element of overall corporate costs; by way of comparison, total U.S. corporate interest expenses are presently running at about $400 billion annually.

Let me also stress that the wisdom of corporate America’s enormous bet on information technology has never been tested by a cyclical downturn in the economy. Under such circumstances, it is highly unlikely that U.S. businesses will prune those costs aggressively. After all, information technology is now widely viewed as a critical element of the business infrastructure, essential to operations and therefore not exactly amenable to the standard cost-cutting typically employed to sustain profit margins. Lacking the discretion to pare the technology, managers will be under all the more pressure to slash labor costs. Yet that strategy may also be quite difficult to implement in the aftermath of the massive head-count reductions made earlier in the 1990s.

Whenever it comes, the next recession will be the first cyclical downturn of the Information Age. And it will find corporate America with a far more rigid cost structure than has been the case in past recessions. This suggests that the next recession might also take a far greater toll on corporate earnings than has been the case in past recessions, a possibility that is completely at odds with the optimistic profit expectations that are currently being discounted by an ever-exuberant stock market. In short, the next shift in the business cycle could well provide an acid test of the two competing scenarios of the productivity-led renaissance and the technology paradox. Stay tuned.

Cost cutting vs. productivity

Let me propose an alternative explanation for the so-called earnings miracles of the 1990s. I am not one of those who believes that explosive gains in the stock market over the past three years are a direct confirmation of the (unmeasured) productivity-led successes in boosting corporate profit margins. A better explanation might be an extraordinary bout of good old-fashioned slash-and-burn cost cutting. Consider the unrelenting waves of downsizing that were a hallmark of the 1990s. Whether such strategies took the form of layoffs, plant closings, or outsourcing, the result was basically the same–companies were making do with less. Sustained productivity growth, by contrast, hinges on getting more out of more–by realizing new synergies between rapidly growing employment and the stock of capital. That outcome has simply not been evident in the lean and mean 1990s. As can be seen in Figure 2, recent trends in both hiring and capital accumulation in the industrial sector have been markedly deficient when compared with the long sweep of historical experience.

How can this be? Doesn’t the confluence of improved competitiveness, upside earnings surprises, and low inflation speak of a nation that is now realizing the fruits of corporate productivity? Not necessarily. In my view, those results by themselves cannot reveal whether they were driven by intense cost cutting or by sustained productivity growth. The evidence, however, weighs heavily in favor of cost cutting. Not only is there a notable lack of improvement in official productivity results for the U.S. economy, there is also persuasive evidence that corporate fixation on cost control has never been greater.

This conclusion should not be surprising. It simply reflects the extreme difficulty of raising white-collar productivity. This intrinsically slow process may be slowed even further if the challenge is to boost the cerebral efficiencies of knowledge workers. And slow improvement may not be enough for corporate managers (and shareholders) confronting the competitive imperatives of the 1990s. As a result, businesses may have few options other than more cost cutting. If that’s so, then the endgame is far more worrisome than the one implied in a productivity-led recovery. In the cost-cutting scenario, companies will become increasingly hollow, lacking both the capital and the labor needed to maintain market share in an ever-expanding domestic and global economy.

Indeed, there are already scattered signs that corporate America may have gone too far down that road in order to boost profits. Recent production bottlenecks at Boeing and Union Pacific are traceable to the excesses of cost cutting and downsizing that occurred in the late 1980s and early 1990s. In a period of sustained growth in productivity, corporate growth is the antidote to such occurrences. But in a world of unrelenting cost cutting, bottlenecks will become far more prevalent, particularly with the rapid expansion of global markets. And then all the heavy lifting associated with a decade of corporate restructuring could quickly be squandered.

The fallacy of historical precedent

Yet another flaw in the productivity revivalist script is the steadfast belief that we have been there before. The New Paradigm proponents argue that the Agricultural Revolution and the Industrial Revolution were part of a continuum that now includes the Information Age. It took a generation for those earlier technologies to begin bearing fruit, and the same can be expected of the long-awaited technology payback of the late 20th century. Dating the advent of new computer technologies to the early 1970s, many are quick to argue that the payback must finally be at hand.

This is where the parable of the productivity-led recovery really falls apart. The breakthroughs of the Agricultural and Industrial Revolutions were all about sustained productivity growth in the creation of tangible products by improving the efficiency of tangible production techniques. By contrast, the supposed breakthroughs of the Information Age hinge more on an intangible knowledge-based product that is largely the result of an equally intangible human thought process.

It may well be that white-collar productivity improvements are simply much harder to come by than blue-collar ones. That’s particularly true in the new global village, a cross-border operating environment that spans multiple time zones and involves new complexities in service-based transactions. That’s certainly the case in the financial services industry, where increasingly elaborate products with multidimensional attributes of risk (such as currencies, credit quality, and a host of systemic factors) are now traded 24 hours a day. In the Information Age, much is made of the exponential growth of computational power. I would argue that the complexity curve of the tasks to be performed has a similar trajectory, suggesting that there might be something close to a standoff between these new technological breakthroughs and the problems they are designed to solve.

The issue of task complexity is undoubtedly a key to understanding the white-collar productivity paradox. The escalating intricacy of knowledge-based work demands longer schedules, facilitated by the portable technologies that make remote problem-solving feasible and, in many cases, mandatory. Whether the time is spent surfing the Web, performing after-hours banking, or hooking up to the office network from home, hotel, or airport waiting lounge, there can be no mistaking the increasingly large time commitment now required of white-collar workers.

Nor is it clear that information technologies have led to dramatic improvements in time management; witness information overload in this era of explosive growth in Web-based publishing, a phenomenon that far outstrips the filtering capabilities of even the most powerful search engines. The futuristic saga of the productivity-led recovery fails to address the obvious question: Where does this incremental time come from? The answer is that it comes increasingly out of leisure time, reflecting an emerging conflict between corporate and personal productivity.

This is consistent with the previous critique of productivity measurement. Productivity enhancement, along with its associated improvements in living standards, is not about working longer but about adding value per unit of work time. This is precisely what’s lacking in the Information Age.

Paradigm lost?

There can be no mistaking the extraordinary breakthroughs of the new technologies of the Information Age. The faster, sleeker, smaller, and more interconnected information appliances of the late 1990s are widely presumed to offer a new vision of work, leisure, and economic and social hierarchies. But is this truly the key to faster productivity growth for the nation?

My answer continues to be “no”; or possibly, if I don my rose-colored glasses, “not yet.” Improvements in underlying productivity growth are one of the most difficult challenges that any nation must confront. And increasing the productivity of knowledge workers in particular is going to be far more difficult to achieve than previous productivity breakthroughs for blue-collar and farm workers.

That takes us to the dark side of America’s technology paradox. Rushing to embrace the New Paradigm entails a real risk of overlooking the most basic and powerful benefit of an improvement in overall productivity: an increase in the national standard of living. On this, the evidence is hardly circumstantial: more than 15 years of virtual stagnation in real wages, an unprecedented widening of inequalities in income distribution, and a dramatic shift in the work-leisure tradeoff that puts increasing stress on family and personal priorities. At the same time, there can be no mistaking the windfalls that have accrued to a small slice of the U.S. population, mainly those fortunate managers, executives, and investors who have benefited from the corporate earnings and stock market bonanza of the 1990s.

In the end, I continue to fear that much of the debate over the fruits of the Information Age boils down to the classic power struggle between capital and labor. I find it difficult to believe that corporate America can cut costs forever; there really is a limit to how far managers can take the credo of “lean and mean,” and there are signs that the limit is now in sight. I find it equally difficult to believe that workers will continue to acquiesce in a system that rewards few for the efforts of many, especially in view of the dramatic cyclical tightening of the labor market that has taken the national unemployment rate to its lowest level in 24 years. A recent upturn in the wage cycle suggests that the forces of supply and demand are now beginning to weigh in with the same cyclical verdict. All this implies that the pendulum of economic power may be starting a long-overdue swing from capital back to labor, repeating the timeworn patterns of power struggles past.

Like it or not, the New Paradigm perception of a technology-led productivity renaissance is about to meet its sternest test. That test should reflect not only the social and economic pressures of worker backlash but also a classic confrontation between cost-cutting tactics and the pressures of the business cycle. Moreover, to the extent that the technology paradox is alive and well (and that remains my view), the days of ever-expanding profit margins, subdued inflation, and low interest rates could well be numbered. Needless to say, such an outcome would come as a rude awakening for those ever-exuberant financial markets that are now priced for the perfection of the Long Boom.

Making Guns Safer

Children are killing children by gunfire. These deaths are occurring in homes, on the streets, and in schools. When possible solutions to this problem are discussed, conversation most often focuses on the troubled youth. Interventions involving conflict resolution programs, values teaching, reducing violence on television, and making available after-school activities and positive role models are proposed. Although each of these interventions may provide benefits, they are, even in combination, inadequate to eliminate childhood shootings. Behavior-modification programs cannot possibly reach and successfully treat every troubled youth capable of creating mayhem if he or she finds an operable firearm within arm’s reach.

But behavior modification isn’t the only possible solution. Another intervention is now being developed: the personalized gun, a weapon that will operate only for the authorized user. Personalized guns could reduce the likelihood of many gun-related injuries to children as well as adults. They could be especially effective in preventing youth suicides and unintentional shootings by young children. Personalized guns could also reduce gun violence by making the many firearms that are stolen and later used in crime useless to criminals. Law enforcement officers, who are at risk of having their handgun taken from them and being shot by it, would be safer with a personalized gun.

About 36,000 individuals died from gunshot wounds in 1995; of these, more than 5,000 were 19 years of age or younger. Suicide is among the leading causes of death for children and young adults. In 1995, more than 2,200 people between 10 and 19 years of age committed suicide in the United States, and 65 percent of these used a gun.

Adolescence is often a turbulent stage of development. Young people are prone to impulsive behavior, and studies show that thoughts of suicide occur among at least one-third of adolescents. Because firearms are among the most lethal methods of suicide, access to an operable firearm can often mean the difference between life and death for a troubled teenager. Studies have shown a strong association between adolescent suicide risk and home gun ownership. Although the causes of suicide are complex, personalizing guns to their adult owners should significantly reduce the risk of suicide among adolescents.

Personalized guns could be especially effective in preventing teenage suicides and unintentional deaths and injuries of children.

The number of unintentional deaths caused by firearms has ranged between 1,225 and 2,000 per year since 1979. Many of the victims are young children. In 1995, the most recent year for which final statistics are available, 440 people age 19 and younger, including 181 who were under 15, were unintentionally killed with guns.

Some have argued that the best way to reduce these unintentional firearm deaths is to “gun proof” children rather than to child-proof guns. It is imprudent, however, to depend on adults’ efforts to keep guns away from children and children’s efforts to avoid guns. Firearms are available in almost 40 percent of U.S. homes, and not all parents can be relied upon to store guns safely. Surveys have documented unsafe storage practices, even among those trained in gun safety.

Stolen guns contribute to the number of gun-related deaths. Experts estimate that about 500,000 guns are stolen each year. Surveys of adult and juvenile criminals indicate that thefts are a significant source of guns used in crime. Roughly one-third of the guns used by armed felons are obtained directly through theft. Many guns illegally sold to criminals on the street have been stolen from homes. Research on the guns used in crime demonstrates that many are no more than a few years old. Requiring all guns to be personalized could, therefore, limit the availability of usable guns to adult and juvenile criminals in the illegal gun market.

Advancing technology

The idea of making a gun that some people cannot operate is not new. Beginning in the late 1880s, Smith & Wesson made a handgun with a grip safety and stated in its marketing materials that “…no ordinary child under eight can possibly discharge it.” More recently, some gun manufacturers have provided trigger-locking devices with their new guns. But trigger locks require the gun owner’s diligence in re-locking the gun each time it has been unlocked. Moreover, handguns are frequently purchased because the buyer believes they provide immediate self-protection, and these gun owners may perceive devices such as trigger locks as a hindrance when they want the gun to be instantly available. Finally, some trigger locks currently on the market are so shoddy that they can easily be removed by anyone.

Today, a number of technologies are available to personalize guns. Magnetic encoding, for example, has long been available. Magna-Trigger™ markets a ring that contains a magnet which, when properly aligned with a magnet installed in the grip of the firearm, physically moves a lever in the grip, allowing the gun to fire. However, the Magna-Trigger™ system is not currently built into guns as original equipment; it must be added later. Because the gun owner must take this additional step and because the magnetic force is not coded to the gun owner, this technology is not optimal.

Another technology, touch memory, was used in 1992 by Johns Hopkins University undergraduate engineering students to develop a non-firing prototype of a personalized gun. Touch memory relies on direct contact between a semiconductor chip and a reader on the grip of the gun. A code is stored on the chip, which is placed on a ring worn by the user. The gun will fire only if the reader recognizes the proper code on the chip.

Another type of personalized gun employs radio frequency technology, in which the user wears a transponder embedded in a ring, a watch, or a pin attached to the user’s clothing. A device within the firearm transmits low-power radio signals to the transponder, which in turn “notifies” the firearm of its presence. If the transponder code is one that has previously been entered into the firearm, the firearm “recognizes” it and is enabled. Without the receipt of that coded message, however, a movable piece within the gun remains in a position that mechanically blocks the gun from firing. One major gun manufacturer has developed prototypes of personalized handguns using radio frequency technology and expects to market these guns soon.
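
The enabling logic this describes is simple enough to sketch. The following fragment, written in Python purely for illustration, is a minimal conceptual sketch of the enroll-and-verify idea; the names and structure are my own and do not represent any manufacturer’s actual design or firmware.

# Minimal conceptual sketch of the enroll-and-verify logic behind a
# radio-frequency personalized gun. Purely illustrative; the names and
# structure are hypothetical, not any manufacturer's actual design.

class PersonalizedFirearm:
    def __init__(self):
        # Transponder codes that have previously been entered into the firearm.
        self.authorized_codes = set()

    def enroll(self, transponder_code):
        # Register a code carried in the owner's ring, watch, or pin.
        self.authorized_codes.add(transponder_code)

    def recognizes(self, transponder_code):
        # True only if a worn transponder answered with a previously entered code.
        return transponder_code is not None and transponder_code in self.authorized_codes

    def trigger_pulled(self, transponder_code):
        # The movable blocking piece stays in place unless a recognized code arrives.
        return "fires" if self.recognizes(transponder_code) else "blocked"

gun = PersonalizedFirearm()
gun.enroll("A1B2C3")                  # the owner's transponder is registered once
print(gun.trigger_pulled("A1B2C3"))   # fires
print(gun.trigger_pulled(None))       # blocked: no transponder in range
print(gun.trigger_pulled("ZZZZZZ"))   # blocked: unrecognized code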

The personalization method of the near future appears to be fingerprint reading technology. A gun would be programmed to recognize one or more fingerprints by use of a tiny reader. This eliminates the need for the authorized user to wear a ring or bracelet. Regardless of the technology that is ultimately chosen by most gun manufacturers, several gun magazines have advised their readers to expect personalized handguns to be readily available within the next few years.

Prices for personalized handguns will be higher than for ordinary handguns. The Magna-Trigger™ device can be fitted to some handguns at a cost of about $250, plus $40 for the ring. One gun manufacturer originally estimated that personalizing a handgun would increase the cost of the gun by about 50 percent; however, with the decreasing cost of electronics and with economies of scale, the cost of personalization should substantially decrease. Polling data show that the gun-buying public is willing to pay an increased cost for a personalized handgun.

Regulating gun safety

Most gun manufacturers have not yet indicated that they will redesign their products for safety. When the manufacturers of other products associated with injuries were slow to employ injury prevention technologies, the federal government forced them to do so. But the federal government does not mandate safety mechanisms for handguns. The Consumer Product Safety Commission, the federal agency established by Congress to oversee the safety of most consumer products, is prohibited from exercising jurisdiction over firearms. However, bills have been introduced in several states that would require new handguns to be personalized. Regulation and litigation against firearms manufacturers may also add to the pressure to personalize guns.

Important legislative and regulatory efforts have already taken place in Massachusetts. The state’s attorney general recently promulgated the nation’s first consumer protection regulations regarding handguns. The regulations require that all handguns manufactured or sold in Massachusetts be made child-resistant. If newly manufactured handguns are not personalized, then stringent warnings about the product’s danger must accompany handgun sales. Bills affecting gun manufacturers’ liability have also been introduced in the state legislature. The proposed legislation imposes strict liability on manufacturers and distributors of firearms for the deaths and injuries their products cause. Strict liability would not be imposed, however, if a firearm employs a mechanism or device designed to prevent anyone except the registered owner from discharging it.

A bill recently introduced in California would require that concealable handguns employ a device designed to prevent use by unauthorized users or be accompanied by a warning that explains the danger of a gun that does not employ a “single-user device.” A bill introduced in the Rhode Island legislature would require all handguns sold in the state to be child-resistant or personalized.

To aid legislative efforts that would require personalized guns, the Johns Hopkins Center for Gun Policy and Research has developed a model law entitled “A Model Handgun Safety Standard Act.” Legislation patterned after the model law has been introduced in Pennsylvania, New York, and New Jersey.

One objection to legislation requiring handguns to be personalized is that the technology has not yet been adequately developed. But in assessing the validity of safety legislation, courts traditionally have held that standards need not be based upon existing devices. For example, in a 1983 case involving a passive-restraint standard promulgated pursuant to the National Traffic and Motor Vehicle Safety Act of 1966, the Supreme Court ruled that “…the Act was necessary because the industry was not sufficiently responsive to safety concerns. The Act intended that safety standards not depend on current technology and could be ‘technology-forcing’ in the sense of inducing the development of superior safety design.”

The model handgun safety legislation mandates the development of a performance standard and provides an extended time for compliance, two features the courts have said contribute to the determination that a standard is technologically feasible. A performance standard does not dictate the design or technology that a manufacturer must employ to comply with the law. The model law calls for adoption of a standard within 18 months of passage of the law, with compliance beginning four years after the standard is adopted.

Legislative efforts to promote the use of personalized guns can be complemented by litigation. For some time, injury-prevention professionals have recognized that product liability litigation fosters injury prevention by creating a financial incentive to design safer products. One lawsuit is already being litigated in California against a gun manufacturer in a case involving a 15-year-old boy who was shot unintentionally by a friend playing with a handgun. The suit alleges that, among other theories of liability, the handgun was defective because its design did not utilize personalization technology. Additional cases against gun manufacturers for failure to personalize their products can be expected.

Firearm manufacturers need to realize the benefits of personalized guns. The threat of legislation, regulation, or litigation may be enough to convince some manufacturers to integrate available personalization technologies into their products. When personalized guns replace present day guns that are operable by anyone, the unauthorized use of guns by children and adolescents will decrease, as will the incidence of gun-related morbidity and mortality.

Cloning news

Gina Kolata, a science reporter for the New York Times, was the first to write about the cloning of Dolly in a U.S. newspaper. Cloning and its media coverage are the signal events in a story that is still far from complete. This book chronicles the first six months of that story in Kolata’s characteristically lucid prose. It does not include more recent events such as physicist Richard Seed’s quasi-credible claim to be seeking private capital to clone a human, the Food and Drug Administration’s (FDA’s) assertion of jurisdiction over any contemplated human cloning experiments, lingering doubts about whether Dolly was really cloned from an adult cell’s nucleus, and rekindled interest in national legislation.

Kolata had early and unusually good access to Ian Wilmut and others who were quickly overwhelmed by a storm of media attention. She was well ahead of the pack and indeed was partially the cause of that storm, and thus had the opportunity to visit the Roslin Institute in Scotland (Dolly’s birthplace and home to the pertinent laboratory work), meet Dolly, and interview the principal characters when their reactions were fresh and spontaneous and before their answers were refined by repetition. So the story is here in full.

Scientists will wince at the apocalyptic nonsense laid bare in chapter one. For example, biochemist-turned-bioethicist Leon Kass implies that cloning will initiate an ineluctable slide down a muddy slope into a brave new world where “the future of humanity may hang in the balance.” That’s a lot of responsibility for one sheep to bear, and it’s no wonder the Roslin Institute keeps Dolly indoors where she can be protected from these crazy humans. Bishop Maraczewski, testifying before Congress on behalf of the National Conference of Bishops, asserts “there is no evidence that humans were given the power to alter their nature or the manner in which they come into existence.” This will be news to obstetricians, even those eschewing new techniques of assisted reproduction that remain off limits under Roman Catholic doctrine.

For the most part, Kolata lets loose-lipped bioethicists and scientific publicity hounds do the ranting, which they were more than willing to do, and holds her own more temperate analysis close to the chest until later in the book. She does lapse into an inapposite analogy, comparing cloning research with the atomic bomb and invoking Oppenheimer’s dictum that physicists “have known sin.” By implication, she casts a moral pall over biotechnologists (although her sympathetic treatment of Wilmut clearly indicates that she wouldn’t call him a sinner). Kolata lamely justifies this analogy by noting that “cloning is complex, multilayered in its threats and promises.” Readers in Hiroshima and Nagasaki could be forgiven a flash of anger at the suggestion that several hundred thousand deaths are comparable to the speculative harms of cloning. Pioneers of cloning may scratch their heads at a facile analogy between producing drugs cheaply and copiously in sheep’s milk and the frantic wartime effort to construct a monster bomb.

Gauging public reaction

Readers should not let themselves be put off, however, because Kolata is merely setting up her story and highlighting the feature that most distinguishes animal cloning from other fields: the long-standing public unease with human cloning. Public anxiety about biological tinkering goes back to Mary Shelley’s Frankenstein, and the announcement of Dolly’s birth inflamed a debate about human cloning that had been smoldering for decades. The story is more about the public reaction to a real but incremental scientific advance than it is about the underlying science that Kolata so lovingly describes (it is clear where her heart is). After the opening chapter, the book’s pace quickens considerably, the prose shifts from purple to true blue, and Kolata hits her stride. We gain insight into how major public media outlets interact with the science journals. The story then turns to parallel analysis of twin themes: the underlying science and the social debate about human cloning. Public concern blends fascination with whiz-bang technology, expectations of health and agricultural benefits, and discomfort about whether society is prepared to prudently manage a deluge of new knowledge and new technologies.

The science is meticulously described, with a quick survey of human embryology and experimental attempts to clone frogs and mammals. One sidebar is the tragicomic saga of Karl Illmensee and the still-unresolved debate about whether he succeeded in cloning mice more than a decade ago. There are few scientific mistakes, and those that remain seem more the result of telescoping too much too quickly than of fundamental misunderstanding. For example, Wilmut is indirectly quoted as saying that animal cloning might be valuable in creating animal models of cystic fibrosis, which of course it cannot do. Cloning could only make many genetically identical animals once the animal model was created.

Although the account of how cloning is linked to the origins of contemporary bioethics is brief, the scholarship is excellent. Kolata unearthed James D. Watson’s obscure 1971 statement to a congressional panel exhorting broad public governance of human cloning, which was an early event in the efforts of Senators Walter Mondale and Edward Kennedy to create a national bioethics commission. The struggle of the current National Bioethics Advisory Commission to develop policy options for the president regarding cloning is covered in its essentials, without getting bogged down in bureaucratic arcana. This keeps the story line clean, but it also limits attention to policy.

Cloning ties into the abortion debate and the related controversy over embryo research. It starkly exposes the conflicting impulses of scientists to leave avenues of inquiry open and of theologians and critics of science and technology to rein some areas in. Scientists argue vigorously against drawing lines in the sand that research should not cross (unless it clearly risks harm); others argue that preserving the sacred requires defining the profane. This is a perennial battle of conflicting presumptions. Scientists ask “why should we stop?” Cloning opponents ask “why should we let you continue?” Judgment depends on who carries the burden of proof.

Probing these presumptions reveals some underlying differences. Scientists do not offer a strong, concrete reason to cross the line; cloning critics are far more forceful in asserting that human cloning is immoral than they are in explaining why. The scientific and practical benefits held out for cloning are a long way off and may or may not justify its use. The strongest argument for pursuing cloning research-that “roadblocks to inquiry might keep us from finding unexpected facts”-stems from the long history of technological surprises that have arisen from research. Among those calling for a cloning ban, the moral certainty that cloning is wrong is surprisingly bereft of justification. Some critics presume that narcissism is the only plausible reason to want a human clone; others invoke the vague notion of hubris, which is more of a general restatement of why we worry about new technologies than a precise analysis of concerns about this particular technology.

So we are left with an unsatisfying situation. Scientists have offered no compelling reason to clone a human being and agree that any safe and reliable means for doing so will take years to develop. Yet confident moralists urge a ban in the meantime without a clear explanation of why human cloning is inherently immoral or will ineluctably lead to social harm. It seems to be a battle of knee-jerk political reflexes working at cross purposes: “research is good” versus “cloning is wrong,” with neither faction being particularly persuasive.

A legislative ban on cloning was a strong possibility in the fevered months after Dolly’s birth and got a further impetus in June 1997, when the National Bioethics Advisory Commission recommended a legislatively crafted moratorium. Momentum for legislative action then stalled until Seed made his proposal to raise private capital for human cloning.

Soon after the Seed story broke, FDA announced that it would review the safety and efficacy of cloning technology. A very low success rate and high likelihood of birth defects are acceptable in cloning experiments on sheep and cows but clearly unacceptable in humans. FDA’s assertion of jurisdiction means human cloning in the United States, federally funded or not, is a long way off because demonstrating safety will be impossible unless and until the methods are orders of magnitude safer and more reliable. As a practical matter, FDA has blocked the road for the foreseeable future. Yet because the real issue is about morality and its public justification, this has not stopped the political response.

Bills introduced in 1997 by Rep. Vernon Ehlers (R-Mich.) and Sen. Christopher Bond (R-Mo.) both suffered from definitional wobble and caused fear that they would inadvertently ban more than the creation of cloned people. The fate of those bills seemed to be quiet oblivion until Seed’s announcement revived them and spawned another group of legislative proposals. A bill introduced by Sen. Ben Nighthorse Campbell (R-Colo.) proposed to ban creation of a human clone as well as human embryo research, thus using cloning as a backdoor to address another perennial controversy. Early in 1998, Senate Republican leaders drafted and attempted to pass a bill that would have banned nuclear transfer from an adult human somatic cell into an egg cell, but scientific professional societies mobilized to thwart the effort. Senators Dianne Feinstein (D-Calif.) and Edward Kennedy (D-Mass.) introduced a bill proscribing implantation of an embryo derived from such nuclear transfer, moving the trigger point from creating a cloned cell to implanting an embryo with intent to create a baby.

Just as significant as the specifics of what would be banned is the institutional difference: the Republican bill would create a new bioethics commission, whereas the Democratic bill calls for the existing bioethics commission to revisit cloning. This is a hint that the debate is as much about how to fill the void of public justification for policy choice in the face of moral pluralism as it is about the technical safety and efficacy of cloning techniques. Whether and when any such bills become law is highly uncertain, but the political story is as engrossing and complex as the cloning story itself.

Kolata’s account is strong on science and the media’s role. Those wanting details about the political story will have to await a more detailed book by an author more focused on policy. The main events are covered here, but the book ends long before the policy story is complete, and there is little analysis of the dynamics of policy formulation. Later authors will no doubt focus on the branches of government and how scientists and bioethicists have influenced them. But Kolata did not set out to write a book about policy. She clearly wanted to produce an accessible book for the general public about why people might care about cloning, not what we should do about it; and on these terms, she succeeds admirably.

Implementing the Kyoto Protocol

The Kyoto Protocol to the United Nations Framework Convention on Climate Change is an agreement of historic proportions. Finally, the world is treating global warming seriously. The protocol could put us on a course that is less polluting, less damaging to agriculture and the international economy, and less threatening to human health. However, the protocol as written forces nations and industries into a crash program to slow global warming by dramatically reducing carbon dioxide emissions by 2010. The cost of enacting this short-term plan will be unnecessarily high, and the crash effort will actually make it more difficult to reach the fundamental emissions reductions required to stabilize the atmosphere for generations to come. The very same slowdown of global warming can be achieved more effectively and at far less cost, however, if a smarter implementation policy with a longer-term view is crafted. Emissions reductions would be phased in over more years, in parallel with the natural replacement of aging equipment, placing much less of a burden on industries and governments worldwide.

Thankfully, implementation of the protocol is far from a fait accompli. There will be a series of meetings leading up to the fourth Conference of the Parties in Buenos Aires in November. There, details are to be decided that will determine the timetable of actions that industries and governments must take. Before any more political momentum builds to ram the plan through, Congress and the world’s governments should stop and consider a longer-term plan that can set the world on a more sensible and effective course.

Global warming guaranteed

Climate change is a long-term problem that can only be addressed adequately with a long-term outlook. The overall aim of the Framework Convention on Climate Change is to stabilize atmospheric concentrations of the so-called greenhouse gases at levels that will not be detrimental to human life or the environment. In implementing the framework convention, the Kyoto Protocol has been hailed because it establishes emissions-reduction targets for each industrialized country, a system for emissions trading among countries, projects between industrialized and developing countries, and a fund for developing country action. But what has been lost in the hoopla is that, even with the protocol’s tough limits, the concentration of greenhouse gases in the atmosphere will have doubled by the end of the 21st century.

Ice core samples drilled in Greenland and Antarctica indicate that atmospheric concentrations of carbon dioxide were fairly constant at roughly 280 parts per million by volume (ppmv) before the beginning of the Industrial Revolution. They have risen to about 360 ppmv today. The protocol calls for industrialized nations to reduce their carbon dioxide emissions to 5 percent below 1990 levels between the years 2008 and 2012–I use the median date of 2010. But even then, the world will still be pumping substantial amounts of carbon dioxide into the atmosphere every year.

The key fact that is overlooked is that it takes hundreds of years for an injection of carbon dioxide into the atmosphere to dissipate. Although each atom of carbon in the atmosphere is exchanged with the biosphere about once every four years on average, the atmosphere still exhibits the results of emissions from the early Industrial Revolution. Just as important, the concentration is cumulative. Even if all the world’s nations cut annual emissions to 5 percent below 1990 levels by 2010, and held them there into the future, the atmospheric concentration of carbon dioxide would continue to rise. Furthermore, since developing country emissions are almost certain to exceed those from industrialized nations soon due to population growth and economic expansion, full implementation of the Kyoto Protocol without additional measures would have little impact on the total accumulation of carbon dioxide in the atmosphere after a few decades. To keep atmospheric carbon dioxide at its current concentration is virtually impossible, even if the world’s economies were drastically altered. The conclusion: Even with extreme short-term sacrifice, we are already committed to doubling the pre-industrial concentration. We will ultimately have to adjust to a warmer world.
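
A rough illustration of why the concentration keeps climbing even at constant emissions (my own arithmetic, using round numbers: global fossil-fuel emissions of roughly 6 billion tons of carbon a year, about half of which remains airborne, and roughly 2.1 billion tons of carbon per ppmv of atmospheric carbon dioxide):

\[
\frac{6\ \text{GtC/yr} \times 0.5}{2.1\ \text{GtC/ppmv}} \approx 1.4\ \text{ppmv per year}
\]

At anything like that pace, the concentration marches steadily upward from today’s 360 ppmv whether annual emissions are frozen at 1990 levels or trimmed slightly below them; only deep cuts in the emission rate itself would flatten the curve.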

The good news is that a doubling of carbon dioxide, though seemingly dramatic, is manageable, as we will see. It is not clear, however, how far beyond doubling we can go without triggering major environmental changes. But even if emissions rates were to rise somewhat above those of recent decades — a perhaps unavoidable outcome if populations and economies grow, increasing the emissions from power plants, vehicles, and the heating and cooling of buildings, the greatest contributors — it would take at least until 2150 to quadruple pre-industrial levels. And that would probably require returning to a coal-based energy system, a highly unlikely scenario given the industrial changes of the last 150 years. Doubling the atmospheric concentration of carbon dioxide is virtually guaranteed, while quadrupling it, or perhaps a bit more, seems to be the upper limit.

Given these boundaries, a realistic goal of international action would be to curb emissions to ensure that no more than a doubling of pre-industrial atmospheric concentrations occurs before the latter part of the 21st century. So far, the Kyoto agreement seems on target.

Is a doubling tolerable?

Before we consider how fast the world must act in order to hold concentrations to a doubling, let us consider if this level is tolerable.

Among the effects of pumping carbon dioxide into the atmosphere, two are particularly significant for climate change. First, it raises the average global temperature. This, in turn, changes the energy level in the atmosphere and oceans, which alters the earth’s hydrologic cycle–the closed system of precipitation to earth and evaporation back to the skies.

There is direct, compelling evidence from measurements of bore holes in rock from many parts of the world that the 20th century has been substantially warmer than recent centuries. Not only has this century been the warmest of the last five, but the rate of temperature change is four times greater than that for the four previous centuries. These measurements show the present-day mean temperature to be a little more than 1.0°C warmer than five centuries ago. About half of this change has occurred in this century alone.

There is also compelling evidence that the hydrologic cycle has increased in intensity this century, mostly due to rising global temperature. This could have more immediate and far-reaching environmental, economic, and social impacts than elevated temperature alone. For example, a more intense cycle causes storms to generate more precipitation, which could raise the moisture level of farmlands, affecting crops, and increase storm runoff that leads to floods or erosion of valuable property.

Nonetheless, several studies suggest that the economies of industrialized nations could easily adapt to the climatic consequences of a doubling of pre-industrial atmospheric carbon dioxide. That is because the rate of change will be slow. The trend this century has been about 0.05°C per decade. Investment cycles for most industrial sectors are rapid enough that suitable adjustments can be made along the way. Even agriculture ought to be able to cope. New cereal hybrids, of the sort that would be needed to adjust to differences in soil moisture, take about eight years to bring into production, and recent experience breeding disease-resistant rice suggests that genetic engineering can reduce this time. It also will not be long before agricultural implements are able to make “on-the-fly” soil-moisture measurements and deliver fertilizer precisely enough to offset the changes they detect.

Rising warmth and moisture would also broaden the breeding grounds for insects, most notably mosquitoes, increasing their spread of diseases like malaria, dengue, and yellow fever. However, lifestyle and public health measures such as mosquito control, eradication programs, and piped water systems, which have wiped out these epidemics in the United States, will far outweigh the effects of future climate change.

Even the effort to counter a possible sea level rise of 30 inches by the end of the next century is not likely to be excessive. In urban and industrial locations, the cost of protective sea walls will be worth it. Elsewhere the coastline can be left to find its new level. The previously valuable property on the water’s edge will be replaced by formerly inland property that becomes newly valuable because it is now next to water. Obviously there will be winners and losers, but then there always have been. Urban expansion will make winners and losers much more rapidly than climate change.

For industrialized countries, then, a doubling of carbon dioxide is not an economic problem. However, a doubling would definitely change particular ecosystems, and the most important question may be whether significant disruption will result. Plant and animal life in bodies of fresh water and in wetlands will face new conditions due to higher temperatures and altered precipitation, and may have difficulty producing sufficient organic sediment and root material to adjust. Other so-called “loosely managed ecosystems” have more capacity to adjust. Ecosystems in general will be forced to reconfigure into new communities more rapidly than they have since the end of the last ice age. But research indicates they should be capable of adjusting quickly enough to maintain the grand mineral and nutrient cycles upon which life on earth depends.

The story is different for developing countries, however. In areas already sorely stressed by environmental problems that cause considerable human suffering, climate change poses a direct threat to humanity. These nations may not have the money to alter farming so it can respond to changing soil moisture, for example, or to implement widespread control and eradication programs to battle the greater spread of disease by insects. Industrialized nations will have to help meet these new demands, just as they help with the problems of today. Many are already doing so, and improvements in established programs should be able to offset the new challenges. Developing nations have so far to go as it is that the added challenges imposed by global warming represent only a marginal increase. The additional suffering will be real, but it pales in comparison to that brought about by much larger forces in these countries, such as war, oppression, and poverty.

If met with good planning, a doubling of the pre-industrial concentration of carbon dioxide poses only modest environmental problems and few, if any, economic ones. The picture changes completely, however, if we ramp up to a quadrupling of the pre-industrial concentration. The consequences could be massive. Although we cannot say exactly how much temperature will have to rise before we confront serious thresholds, we can make some educated guesses. Various models indicate that crossing the 5°C threshold will change weather patterns and soil moisture enough that U.S. agriculture would have to shift to a completely different set of cultivars. Altered rainfall patterns could combine with dramatically reconfigured ecosystems to change the nutrient flows in soils across the entire Midwest, seriously threatening the productivity of the nation’s bread basket. Studies in Texas show that bottomland hardwood forests of the coastal plain might be unable to rebound from fires or storms, affecting the viability of both preserved and commercial forests there.

At some point, continued temperature rise will trigger an even greater global disaster, as well. Salinity and temperature differentials in the oceans are important in driving what is called the deep ocean conveyer, a huge flow that sinks in the North Atlantic, runs around the African cape, and empties into the Pacific Ocean. Up-welling currents from this conveyer carry nutrients to the major fishing areas of the world. There is evidence that sufficient warming could increase precipitation in the North Atlantic basin enough to change salinity and alter ocean temperatures to a degree that would slow or even stop the conveyer. At a minimum, ocean fishing worldwide would be affected. The consequences for weather would be drastic around the world, dwarfing anything that has been dished out by the El Niño Southern Oscillation, a periodic shift in ocean temperatures and flows in the South Pacific. Though nobody knows what the consequences of stopping the deep ocean conveyer would be, it is thought that Europe would cool dramatically as the warm Gulf Stream halts.

The world may be able to adjust to a doubling of the pre-industrial concentrations of atmospheric carbon dioxide. But continued increases will eventually reach a point that can only be called “scary.”

Too much too soon

Since the world should be able to handle a doubling of carbon dioxide concentrations, but there is reason to worry when levels rise much beyond that, it seems the Kyoto Protocol’s overall aim of reducing emissions is the right goal. But ironically, the protocol’s provisions may make it harder to achieve the long-term emissions reductions that are needed to stabilize atmospheric concentrations of greenhouse gases. Whether the agreement goes down in history as a watershed event or a costly detour depends upon the details worked out at the November Conference of the Parties in Buenos Aires. Unfortunately, if the signatory nations attempt to fulfill the commitments they made to reduce emissions by 2010, they are likely to take actions that are both more expensive and less effective than smarter alternatives.

To start, there is a serious question whether many nations can even hit the target. Only two of the industrialized countries that committed in 1992 to voluntarily reduce emissions to 1990 levels have done so — the UK, because it eliminated coal subsidies and switched to North Sea gas, and Germany, which shut down inefficient and uneconomical factories in the former East Germany. Both of these steps were one-time windfalls. Now, suddenly, the protocol expects all the other countries that have not been able to reduce emissions over the last seven years to reduce them to five percent or more below 1990 levels in the next twelve years.

Doing so will require concerted effort. In the United States, the Department of Energy’s Energy Information Administration projects that carbon dioxide emissions will rise 30 percent by 2010 if no actions are taken, so returning to 1990 levels would require cutting annual emissions by about 400 million tons. The Environmental Energy Technologies Division at the Lawrence Berkeley National Laboratory calculates that U.S. emissions could be reduced about halfway to 1990 levels by adopting efficiency approaches that would cost about $50 per ton of avoided carbon emissions. If the burden for this reduction were spread equally across all sources of emissions and the costs were passed on to consumers (which they would be), this would correspond to an increase in the price of gasoline of 12 cents per gallon. An American Petroleum Institute study estimated that it would cost about $200 per ton to get all the way down to the 1990 level. Even if actual emissions reductions turn out to be less expensive than these estimates, the cost will be considerable. Yet the United States committed to an even greater reduction in Kyoto, 7 percent below 1990 emissions, and achieving that additional cut would cost still more.
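The gasoline arithmetic can be checked with a quick back-of-the-envelope calculation. The sketch below assumes a carbon content of roughly 2.4 kilograms per gallon of gasoline, a figure not given in the article, and treats the per-ton costs as metric tons of carbon.

```python
# Rough check of the abatement-cost arithmetic above. The carbon content of
# gasoline (~2.4 kg of carbon per gallon) is an assumed figure for illustration.
KG_CARBON_PER_GALLON = 2.4   # assumed carbon content of one gallon of gasoline
KG_PER_METRIC_TON = 1000.0

def pump_price_increase_cents(cost_per_ton_carbon):
    """Translate a $/ton-of-carbon abatement cost into cents per gallon of gasoline."""
    dollars_per_gallon = cost_per_ton_carbon * KG_CARBON_PER_GALLON / KG_PER_METRIC_TON
    return dollars_per_gallon * 100

print(pump_price_increase_cents(50))   # ~12 cents/gallon, matching the Berkeley Lab estimate
print(pump_price_increase_cents(200))  # ~48 cents/gallon if the API's $200/ton figure held
```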

For the rest of industry, meeting the Kyoto targets will force companies in virtually every sector to undertake massive retrofits, putting in place technology that emits less or is more energy efficient. Retrofitting is almost always more expensive than installing new equipment when old equipment reaches its natural end of life, but retrofitting is the only way to meet the short deadline. Electric utilities will have to tear out thousands of costly pieces of equipment long before the end of their normal 25- to 30-year lifetimes, severely compromising their balance sheets and raising our utility rates. Or they will have to add emissions control equipment that will be needed only until the current equipment is retired; once new, less polluting equipment goes online, the add-on controls will no longer serve any purpose.

Equipment would have to be replaced prematurely in commercial and residential buildings as well, to improve the efficiency of commercial equipment, lighting, and heating and ventilation systems and of residential heating, air conditioning, and lighting.

The United States’ commitment to reduce emissions more than 30 percent below what they otherwise would be in 2010 will require massive changes with deep implications for industrial practices and consumers’ habits. There is little evidence that our country, or any other industrialized country, is willing to make the huge investments required on the timescale set by the Kyoto Protocol. The Clinton administration’s answer is that tax incentives, research subsidies, and trading will enable the United States to meet its target with only “modest” price hikes on the order of 4 to 6 cents per gallon of gasoline. But this assessment assumes we can cut our abatement costs in half through emissions trading with other industrial countries, and by another quarter through trading with developing countries. As we shall see, whether mechanisms will be put in place to realize these cost reductions efficiently is dubious at best. Robert Stavins, an economist and professor of public policy at Harvard’s John F. Kennedy School of Government, thinks the administration’s claims are optimistic. “It is true that the impact can be relatively small — if this is done in the smartest possible way. But if we don’t do it that way it will cost 10 times what the administration is saying.”

There is a further problem with the strategy embodied in the Kyoto Protocol. The investments required to meet the targets by 2010 are likely to use up funds that would otherwise have gone to replace aging equipment with new, more efficient, more expensive technology. If a utility or manufacturer is forced to spend precious capital on a retrofit now, it won’t have the money to install more efficient equipment later.

And there’s the rub. Remember that curbing emissions to 5 percent below 1990 levels will not stabilize atmospheric concentrations of greenhouse gases. In particular, emissions of carbon ultimately will have to be essentially eliminated. The gain from rushing to meet the targets in 2010 is nowhere near worth the economic pain.

Better ways to reduce emissions

The question, then, is whether it is more effective to require a manufacturer or utility to spend money on retrofits to meet the short-term deadline, or to allow it to phase in more efficient equipment as old machinery becomes obsolete. Let’s consider a few examples.

The pulp and paper industry is very energy intensive and creates large volumes of pollutants, including the chlorine and ozone used to bleach paper white. Under the Kyoto Protocol, paper manufacturers or the utilities that deliver power to them would have to undertake costly actions to reduce carbon dioxide emissions. Meanwhile, a new bleaching process is being developed that requires neither chlorine nor ozone and would reduce energy consumption by 50 percent. The process has yet to be perfected and is unlikely to be widely deployed by 2010, but it might be in wide use by 2015. If manufacturers must spend large sums now, investment in the new process will be slowed, delaying its deployment and thus delaying a natural reduction in energy use and carbon dioxide emissions. If the industry did not have to divert funds to short-term reductions, it might even be able to bring the new process online sooner, which would cut not only carbon dioxide emissions but also chlorine and ozone emissions, while lowering energy costs.

In the metal casting industry, new technology is being developed that would increase the yield of the casting process from 55 percent to 65 percent. The higher yield means less raw material and less electricity are needed for processing, and both gains translate into lower carbon dioxide emissions. Again, spending money to bring this process online reduces both carbon dioxide emissions and the manufacturer’s costs. The alternative, more short-term emissions control, raises costs for everyone.

Another good example comes from the commercial building sector. Studies show that replacing static insulation (put in walls and roofs to increase thermal resistance) with dynamic systems such as computer-controlled windows and sensor-controlled ventilation could reduce a building’s energy load for heating and cooling by 35 to 45 percent. But even if the technology becomes standard in new buildings, new construction adds only 2 to 3 percent to the existing building stock in any given year, and nearly 80 percent of the commercial buildings existing in 1997 will still be occupied in 2010. Retrofitting existing buildings with dynamic insulation systems would be extremely expensive; it is far more cost-effective to let natural turnover improve energy consumption, and therefore carbon emissions, than to force costly retrofits now.
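The turnover figures hang together, as a simple compounding sketch shows. The assumption that the existing stock retires at roughly the same 2 to 3 percent annual rate as new construction is mine, made only for illustration.

```python
# Minimal sketch: if 2-3 percent of the commercial building stock turns over each
# year (an illustrative assumption), what share of the 1997 stock remains in 2010?
years = 2010 - 1997
for annual_turnover in (0.02, 0.03):
    remaining = (1 - annual_turnover) ** years
    print(f"{annual_turnover:.0%} annual turnover: {remaining:.0%} of the 1997 stock remains")
# Prints roughly 77% and 67%, consistent with the article's "nearly 80 percent"
# figure at the low end of the turnover range.
```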

Electric utilities will be among the hardest hit. They will have to add costly equipment that will be used for only a few years beyond 2010, at which point it will be obsolete as more efficient generation equipment becomes available. Forcing the short-term expense will divert funds the utilities could use to buy more expensive but more efficient equipment when the current generators must be replaced, raising costs for all customers and jeopardizing the long-run savings that more efficient equipment would bring. Such expenditures are doubly questionable when stabilizing atmospheric concentrations requires so much more in the long run. A much bigger kick would come from wider use of combined systems such as cogeneration, in which waste heat from electricity generation is used to power industrial processes or heat buildings. Again, meeting short-term targets will eat up funds and slow options with ultimately bigger payoffs.

When case studies like these are repeated in industry after industry, it becomes clear that rushing to meet the artificial Kyoto deadline of 2010 will raise short-term costs considerably and siphon off money that could be used for smarter, long-term investments that would achieve the same carbon dioxide reductions while also lowering costs and the emission of other pollutants.

The payoff from rushed retrofits in the industrialized nations is further diminished because fossil fuel combustion, the principal source of energy, is expected to grow dramatically in developing countries as they become more populous and economically active. The aggregate emissions from developing countries will soon exceed those of the industrialized world. Given that, all the pain of implementing the Kyoto Protocol in industrialized nations will produce only a pencil-line-thin deviation in the graph of carbon dioxide concentrations in the earth’s atmosphere.

Failure to comply

There is another reason to favor a longer-term implementation of the Kyoto Protocol: There is serious doubt as to whether countries will make the short-term investments necessary to meet the 2010 deadline.

The proponents of the protocol argue that once proper incentives exist, all kinds of cheap and even profitable ways of reducing emissions will be found. One of the most highly touted is a provision in the agreement allowing countries to trade emissions rights in order to reach the limits they agreed to in the protocol. A country that reduces emissions below its limit can sell the difference to a country that is over its limit. For example, if one nation is 10 units below its limit, it can sell those units to a polluting country; if that country is 15 units above its limit, it would then have to reduce its emissions by only 5 units to meet its requirement. The proponents point to the resounding success of the United States’ sulfur emissions trading scheme used by industry, which has reduced overall emissions to about 40 percent below the target at much less cost than was anticipated.
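The unit arithmetic of such a trade can be made concrete with a small sketch. The country labels, caps, and emissions below are hypothetical numbers chosen only to mirror the 10-unit and 15-unit example; nothing in the protocol specifies them.

```python
# Toy illustration of the trading arithmetic described above, with hypothetical numbers.
caps      = {"A": 100, "B": 100}   # each country's emissions limit under the protocol
emissions = {"A": 90,  "B": 115}   # actual emissions: A is 10 units under, B is 15 over

surplus_a   = caps["A"] - emissions["A"]      # 10 units that A can sell
shortfall_b = emissions["B"] - caps["B"]      # 15 units that B must cover

traded = min(surplus_a, shortfall_b)          # A sells its entire 10-unit surplus to B
reduction_b_must_make = shortfall_b - traded  # B still has to cut the remaining 5 units itself

print(traded, reduction_b_must_make)          # -> 10 5
```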

However, emissions trading in the Kyoto scheme will be much more difficult to establish. First, there is a big flaw: The trading is to be between countries. But countries don’t pollute; companies and households do. A nation wishing to create a surplus to sell will somehow have to get industry and homeowners to comply, and a country buying a credit will somehow have to collect the funds from all its polluting sectors. Each of these arrangements will be a practical nightmare.

There are other complications. The most efficient trading programs establish a free market in emissions permits, in which private entities execute trades with minimal bureaucratic red tape. This kind of efficiency is unlikely when governments famous for bureaucracy are executing the trades. Also, each trader must have a recognized emissions baseline so that surpluses and excesses can be properly measured and negotiated. Setting baselines for different countries will be extremely difficult; how would emissions or credits from electricity generated in France but consumed in Germany be allocated? Finally, if the U.S. sulfur emissions trading scheme is any guide, there will have to be an overseer with the power to prevent governments from skewing arrangements to their benefit; establishing who or what will fill that role will be a true challenge. Preferences for organizing the trading system are likely to conflict, too: Tradition in Europe and Japan favors greater governmental control, whereas the United States has had positive experience with private mechanisms. It is unclear how to reconcile these differences in an international trading scheme.

The Kyoto Protocol also provides for industrialized countries to undertake joint projects with developing countries. An industrialized nation would receive a credit toward meeting its own target equal to the amount of emissions it helps reduce in a developing country. However, this “clean development mechanism” is viewed with suspicion by many in the developing world. They fear that rich industrialized countries will use their greater financial power to avoid emissions restrictions by purchasing emissions reductions from poorer countries and slowing their development in the process. The protocol provides no mechanism for addressing such concerns, nor does it specify how to determine meaningful baselines against which reductions can be measured.

Participation by developing countries is another weak plank in the protocol’s short-term platform. The lack of early commitments by key developing countries not only aggravates concerns in the United States and other industrialized countries about international competitiveness, but also raises the possibility that developing countries will become locked into more fossil-fuel-intensive technologies. Just as available funds in industrialized countries are likely to be used for retrofits, precluding investments in longer-term alternatives, funds in developing countries might be invested in less efficient energy production technology that will later be expensive to replace. The absence of developing-country commitments in the Kyoto Protocol must be remedied in the long run; it would be much better if those countries began now to build their power and industrial infrastructure with the most efficient new technologies emerging in the coming decades. Furthermore, the U.S. Senate, which ultimately must ratify the treaty, expressed its unwillingness in a 95-to-0 vote last year to support any treaty that does not include full participation by developing countries. The Clinton administration has stated that it will not submit the treaty for ratification until major developing countries have committed to participate.

From the scientific perspective, it is not necessary to involve all developing countries. The eight countries currently producing the largest carbon dioxide emissions are the United States, Russia, Japan, Germany, the United Kingdom, Canada, China, and India. Adding a few rapidly industrializing countries, such as Brazil, Indonesia, Mexico, and South Korea, would encompass well over 60 percent of the world’s current emissions and the bulk of its coal resources, the most carbon-intensive of fuels. Commitments by Europe, Japan, and the United States to reduce their domestic emissions, combined with the possibility of joint implementation among this relatively small set of countries, could effectively address the climate change problem.

The list of serious controversies that must be resolved in implementing the Kyoto Protocol is long, and they will make achieving a rational and effective plan in Buenos Aires a few short months from now extremely difficult.

Missing pieces

Indeed, the national negotiators at the Kyoto meeting apparently concluded that the only way to reach agreement there was to leave most of the difficult issues to be worked out later. The scope of those issues is daunting. How they are resolved before and during the November Conference of the Parties will largely determine the protocol’s value.

Several crucial features have thus far been left out. These include the rules and institutions that will govern the international trading of emissions credits among industrialized countries and joint implementation between industrialized and developing countries. Also missing are procedures for determining whether a nation’s actions satisfy the protocol’s rules, as well as the equally critical criteria for judging compliance and any penalties for noncompliance.

The protocol states that the methods by which a country measures emissions and calculates whether it is meeting its targets are to be based on the work of the Intergovernmental Panel on Climate Change, an international group of government scientists that has assessed and summarized the science base, and the Subsidiary Body for Scientific and Technological Advice, an entity of the Framework Convention on Climate Change. These methods must still be worked out and approved by the Conference of the Parties. The protocol also specifies that expert review teams, selected from professionals nominated by signatory nations and appropriate intergovernmental organizations, will provide a thorough and comprehensive technical assessment of all aspects of implementation by a signatory nation. The relevant guidelines and methods are to be determined at the Buenos Aires meeting. Clearly, there is a lot to be decided at the Conference of the Parties. Yet even if all these issues can be successfully resolved, the protocol is still missing features that are crucial for successful climate control.

The first is an adequately long-term perspective. Most glaring is the absence of what might be called “futures trading” in emissions credits. There is a provision that allows a country that reduces its emissions below its commitment in one control period to “bank” that credit for application in a later period. But no credit is given for current actions that would reduce emissions in future periods. This actually creates a disincentive for investments in the massive infrastructure changes and new technologies needed to meet the long-term goal of atmospheric stabilization. For example, if the pulp and paper industry were to invest heavily in developing the new bleaching process between 2000 and 2012, but the process was not widely implemented until, say, 2015, the industry would receive no credit for that investment under the Kyoto Protocol, even though in the long term it would reduce emissions by much more than any short-term investment could. A provision should be added to the protocol allowing credits for investments made in one phase that will reap benefits in a later phase.

A second missing feature is an effective Secretariat. Not all issues of certification, verification, and compliance in national assessments, emissions trading, and joint implementation can simply be farmed out to external expert teams, as the protocol provides. The protocol establishes a central Secretariat but virtually ignores its role and functions. To implement the protocol, the Secretariat will require a high level of technical expertise and considerable manpower, especially if sanctions with teeth are envisioned. The German government has agreed to host the Secretariat, but for this body to implement the protocol effectively, it will need far greater stature. The Conference of the Parties should add language to the agreement clarifying the Secretariat’s functions, providing for a competent staff, and establishing a funding mechanism.

Expecting the Conference of the Parties to manage all these large tasks by November borders on the ridiculous. But even after all these questions are answered, there remains yet another step. The overall structure of the implementing scheme needs an appropriate degree of flexibility. The experience of the International Atomic Energy Agency, which also operates in an international arena marked by considerable scientific and technical complexity, is instructive here. Part of its success in handling technical issues that also touch domestic, social, and economic activities is its flexible implementation regime. The international legal order for nuclear energy is a mix of legally binding rules and agreements, advisory standards, and regulations. The mix constantly changes, with today’s non-binding standards becoming tomorrow’s binding commitments. Suitable flexibility will be equally helpful in the area of climate control. Yet it remains to be included in the protocol’s provisions.

A better implementation plan

The likelihood of the Kyoto Protocol being fully worked out by November is slim. And that is perhaps good. A doubling of the pre-industrial concentration of carbon dioxide is virtually assured. So before the signatory nations spend considerable effort blazing a path to implementing the protocol, they would be better off slowing down and considering whether a longer-term framework that achieves the same delay in the doubling wouldn’t make much more economic sense.

Do not misread my argument. I am not calling for a do-nothing-now policy. I am calling for a different, smarter implementation strategy to reach the end goal of the climate convention. The nations of the world cannot simply wait until current technology wears out and then shop for replacements, because what is needed is unlikely to be available by then. What the world must do is invest, now, in technology that is far more efficient, has much lower emissions, and will be ready for widespread deployment as current equipment is retired.

Neither nations nor industries have limitless resources. Instead of spending excessive amounts of money for costly, short-lived retrofits to meet an arbitrary deadline of 2010, the protocol should encourage research and development investments that will ensure a more effective, less costly fix for the longer-term problem.

Rather than get buried in the myriad bureaucratic details now required of the November meeting, the Conference of the Parties should initiate a strategic assessment of energy-intensive industries to see where the greatest gains can be made and where technologies such as the new paper bleaching and metal casting processes and cogeneration are already waiting in the wings. The signatory nations should develop the equivalent of a critical technologies list for the utility industry, heavy industry, transportation, housing, and so on. The U.S. government’s experience in identifying critical technologies for defense, and in the technology road-mapping that has proven useful for overcoming problems in the integrated circuit industry, could inform these efforts.

The Conference of the Parties should also focus on creating a system of phased-in emissions targets, rather than the sheer wall of 5 percent below 1990 levels by 2010. For example, to lessen the consumption of fossil fuels, it would be much more tenable for the U.S. government to impose a gradually increasing gasoline tax of, say, a few cents a year for 20 years than to hike the price abruptly by 2010. And if manufacturers can plan on replacing technology over its full life span, they will be much better able financially to develop truly efficient new technology.

Finally, the Conference of the Parties should make provisions for a futures trading system in emissions credits, to inspire investment today that will benefit us all tomorrow.

Should the Senate ratify the Kyoto Protocol? It depends on what happens in Buenos Aires. If the Conference of the Parties can achieve the kinds of strategic, long-term adjustments just mentioned, the protocol would be much more cost-effective. Congress should support this kind of plan and help provide the details. If, however, the Conference of the Parties simply pushes ahead on the short-term road it has already set, the world’s nations may be better off scrapping the Kyoto Protocol and starting over.

The Two Cultures Revisited

Science Wars is the title of a book of essays edited by Andrew Ross and published in 1996. The inflated premise of the book is that a fierce intellectual battle is being waged between the cultural critics of science and the scientific establishment. The essays in the book were in large part a response to Higher Superstition: The Academic Left and Its Quarrels with Science, an equally overstated jeremiad by biologist Paul R. Gross and mathematician Norman Levitt, which was published in 1994. Together, they create the impression that C. P. Snow’s two cultures have become even more remote from one another.

Gross and Levitt wrote their book to warn the scientific community that a growing network of leftist scholars in the humanities and social sciences was engaged in a subversive effort to undermine public faith in rationalism and the objectivity of science. They strung together quotes, often out of context and often misinterpreted, from a broad array of feminists, luddites, Marxists, and postmodernists to create the image of a loosely linked radical coalition determined to topple science from its pedestal of public respect. They uncovered much that was misinformed, misguided, and downright wacky. They then characterized this as the perspective of the “academic left,” though they admitted that this was in no way synonymous with what everyone else called the academic left. The implication was that science represented all that was rational and good, and that all critiques of science were lunatic attempts to undermine Western political, economic, and intellectual traditions. The only conceivable response to such an onslaught was to take whatever action was necessary to discredit the enemy.

In his introduction to Science Wars, Andrew Ross makes a valiant effort to demonstrate that the Gross/Levitt caricature is accurate. He points to a crisis in science resulting from research that finds environmental and health hazards being produced by advanced technology and thus challenges science’s strong link to military, corporate, and state interests. He goes on to link this to a crisis in industrial capitalism and claims that the defenders of science are engaged in a Science War that is an extension of the Culture War conducted by conservatives against feminism, multiculturalism, and secular humanism. In Ross’s words, “all the fine talk about the enlightened pursuit of public knowledge” is in reality a screen for the fact that “secrecy and competition are the guiding principles of research.” In his view, the claims of scientific objectivity are an attempt to conceal the fact that science’s values are actually those of an extreme form of free market capitalism. This is a caricature of Ross’s view, but he does say things that allow him to be caricatured.

The selections in Ross’s book cannot be so easily ridiculed. Political scientist Langdon Winner, English professor George Levine, sociologist Dorothy Nelkin, biologist Richard C. Lewontin, and others do not indulge in broad swipes at capitalism, do not make ungrounded generalizations about who controls science, and do not pretend that science rests on a crumbling foundation. They do, however, argue convincingly that scientists are too unwilling to examine the social, political, and philosophical aspects of their work. They make important distinctions about what scholars in other disciplines can add to an understanding of science. They refuse to be taken in by the assertion that the practice of science can be free of its cultural and economic environment, that some magical scientific method insulates it from the forces that shape all other human endeavors.

Langdon Winner points out that studies of science fall into four broad categories: 1) straightforward, and almost always favorable, descriptions of how science and technology operate in practice; 2) the application to science of analytic approaches that have been used to study other segments of society; 3) the study of the role that science plays in addressing practical social problems such as public health, environment, and defense; and 4) the work of philosophers and social theorists who examine the deep structure of ideas and institutions that form the foundation of our social system. Lumping all of these together, as Gross and Levitt do, is pointless. For example, the second approach aims to distinguish how science differs from other social institutions, and the last seeks to identify the deeper connections that link science to other contemporary institutions. The third approach (which is characteristic of Issues) ignores these more abstract questions and looks only at the practical policy choices that are influenced by science.

The critics of science are also guilty of some unwarranted lumping. They commonly write of a single scientific method or scientific approach to knowledge, overlooking the variety of approaches that characterize the various fields of study. They would do well to read scientists such as Freeman Dyson and Stephen Jay Gould, who provide insights into the profound differences between the use of an inductive approach in biology and a deductive approach in physics.

The bigger picture

Most scientists, most of the time, couldn’t care less about these debates. Their attitude is that what is said in the English department, wherever it happens to be on campus, has no effect on science or on public opinion about science. What attracted more attention to these issues a few years ago was the fact that federal support for science was expected to fall dramatically, and Sen. Barbara Mikulski, who then chaired the Senate appropriations subcommittee that set funding levels, was demanding that research be guided not just by the interests of scientists but also by the specific needs of the nation. Fearing that their funding and their freedom to control that funding were in danger, scientists felt besieged and were prepared to do battle with all their enemies, real or imagined: Perhaps these muddleheaded academics were actually undermining public faith in science.

Other unrelated social trends were also perceived to be part of the conspiracy. New Age philosophy with its accompanying belief in nonrational ways of thinking was gaining adherents. A disturbing number of people expressed faith in alternative medicine in spite of the absence of scientific evidence to support it. Extremists in the environmental movement voiced their antipathy to all modern technology. The Unabomber was still on the loose. Assessments of public understanding of science revealed appalling ignorance, even among college graduates. To some, it appeared that science was on the ropes.

The background is different today. As the “Straight Talk” column in this issue illustrates, bipartisan support is building in Congress to not just maintain but to increase federal research spending. Sen. Mikulski (D-Md.) was replaced as chair of her subcommittee when the Republicans captured the Senate, and the discussion of “directed” basic research has subsided. New Age ideas are still common, but they are now seen as nonscientific rather than anti-scientific. Alternative medicine has become even more popular, and the federal government is now funding a small program to evaluate unconventional practices. Concern remains that pressure from a few members of Congress who are true believers will prevent the Office of Alternative Medicine from being an objective source of information, but the more public attention is devoted to evaluation of these techniques the more likely it is that accurate information will emerge. Since the death of Edward Abbey, the most articulate member of the Earth First wing of environmentalism, the influence of the radical fringe has diminished markedly.

Lack of scientific understanding among the public remains a problem, but the public itself is dissatisfied with the current state of affairs. The focus of attention this year has been on students. The most recent report from the Third International Math and Science Study revealed that U.S. high school students trail behind their contemporaries in most countries in their knowledge of math and science. This was front page news across the country and was followed by editorials and political speeches calling for renewed attention to these subjects in school. Virtually no one questioned the value of science or the importance of ensuring that all students have a fundamental understanding of science.

One of the fears that preoccupied Gross and Levitt was that the schools would soon be teaching feminist science or Afrocentric science. At the time they wrote their book, there was a proposed Afrocentric science curriculum, but it was soon laughed out of consideration. Since then, the only curricula receiving serious attention were those aimed at implementing the standards developed by the National Academy of Sciences.

Beyond caricature

Now that scientists are feeling more comfortable about their government support and now that the most ludicrous antiscience notions have receded to the fringe, what are we to make of the analyses of science being done by cultural critics, sociologists, and philosophers? The tendency among scientists is to do what they’ve always done: ignore them. Success breeds complacency. If society is supporting us, we must be doing what’s right.

Gross and Levitt allowed in their book that there was a place for a critical analysis of the interplay of social and cultural forces with scientific research. But in their view, this study would require such rigor and detailed insight that it could be conducted only by “a scientist of professional competence, or nearly so.” Apparently, only someone who was a member of the culture could understand it well enough to evaluate it reliably and objectively. Let’s hope that they don’t apply the same standard to criminology. The simple truth is that they have little respect for the analytic skills of those in the humanities or the social sciences.

The advantage of having scholars from other disciplines look at science is that they don’t look at it the same way that scientists do. Within science there is a growing awareness of interdisciplinary studies. Chemistry can make a contribution to physics or biology because chemists do not approach problems in the same way. Of course, a chemist can go only so far without the help of a physicist or biologist. This is what should also be happening in cultural studies of science.

Two factors contributed to the tendency of some people to imagine that we were traveling down the path to science wars in the early 1990s. The first was a sense of crisis among scientists about funding prospects, and the second was that scientists and the critics of science had had so little interaction that it was easy for each side to imagine the worst about the other. The result was an imagined battle between the irrational and destructive forces of the academic left and the pawns of the military-industrial complex, a battle of no interest to the majority of scientists and humanists who dwell in the vast middle ground between these extremes.

The current zeitgeist is much more favorable for science and provides a climate in which scientists can interact with analysts of science without feeling threatened. There has been much discussion in recent years about the need to forge a new social contract between science and the nation. The implicit contract developed by Vannevar Bush at the end of World War II is out of date for a post-Cold War world. Although many members of Congress are willing to support increased research funding, Rep. James Sensenbrenner (R-Wisc.), the chair of the House Science Committee, has said that he is not willing to support increased funding without an updated rationale for that spending. Rep. Vernon Ehlers (R-Mich.) is leading an effort to develop that rationale. Although he is not likely to consult with cultural critics of science, it would be wise for scientists to begin a dialogue. There is probably no point in meeting with the radical fringe that alarmed Gross and Levitt, but there is no shortage of responsible and thoughtful scholars who could help scientists gain a fresh perspective on their work. In addition, the work of these scholars would be better grounded and more likely to be taken seriously if they had more direct interaction with scientists.

An encouraging sign that such an interaction might be beginning can be found in the March 1998 Atlantic Monthly. In an article entitled “Back from Chaos,” Harvard University entomologist E. O. Wilson addresses the philosophical underpinnings of science and the postmodernist challenge to the authority of science. Academic theorists are likely to consider Wilson’s endorsement of Enlightenment thinking to be quaintly old-fashioned and his hope for an intellectual consilience (which Wilson defines as “jumping together” of knowledge across disciplines) between the sciences and the humanities to be naively optimistic, but at least he is willing to take the analyses of nonscientists seriously and to seek more interaction. Besides, Wilson’s article is more likely to strike a chord with the educated public than is the type of aggressively opaque jargon that is often found in the work of the theorists.

The best work will come when scientists and nonscientists work cooperatively to develop ideas that incorporate a working knowledge of science as it is practiced today with a sophisticated understanding of modern intellectual directions. As Wilson notes, experts in all disciplines suffer from too narrow a focus. The movement toward interdisciplinary work among the sciences should be extended to include the humanities.

Forum – Spring 1998

National school standards

Cross-national comparisons, along with longitudinal studies and evaluations of large-scale interventions, remain the rare exception in the educational research literature. For that reason alone, the article by Gilbert A. Valverde and William H. Schmidt (“Refocusing U.S. Math and Science Education,” Issues, Winter 1998) on the lessons to be learned from the Third International Mathematics and Science Study (TIMSS) should command the attention of all who worry about U.S. math and science education or, more precisely, the preparation of our children for the 21st-century workplace.

Applauding the design and execution of TIMSS, however, does not ensure the utility of its lessons for education reform. The TIMSS snapshot of teaching, curriculum, textbooks, and student learning in 1995 measures key components of science and math education, not the performance of systems trying to integrate, or merely cope with, all the factors that influence teaching and learning.

Valverde and Schmidt offer inferences about what underlies the declining achievement of U.S. students relative to their age peers over the course of formal education. That our students lag those in other countries is no revelation: Students learn what they are taught, and too often they are taught by teachers with dubious preparation and content knowledge, using materials of perfunctory content in a rigidly structured school environment.

The authors’ explanation strikes at the heart of what is sacrosanctly embodied in 16,000 school districts: local control. Our interpretation is that the slippage in student achievement from grade 4 to 12 documented in TIMSS is a result of local decisionmaking, reinforced by textbook publishers’ marketing strategies that appeal to the largest number of districts by bloating topical content. This unfocused curriculum reflects unfocused systems of education that have the autonomy to make choices to the detriment of student learning.

TIMSS suggests that local control is a failed experiment. The antidote? National content standards. In the words of Valverde and Schmidt, “when well defined, a ‘high standards for all students’ approach can help guide policymakers in ensuring access to the resources necessary to help underprivileged schoolchildren meet these standards. In fact, this is a common justification for the ‘high standards for all’ approach in many TIMSS countries.” It is also the approach in some districts across America: those conducting National Science Foundation-sponsored systemwide reform reaching from resources to infrastructure to accountability for student learning.

The TIMSS results illuminate the legacy of decentralized schooling in the United States. Ninety-four cents of every dollar spent on K-12 education is market-driven by a quirky alliance of external forces (textbook publishers and colleges of education) and internal (district and school) sovereignty over what is “the best for my kids,” choices virtually uninformed by research, evaluation, and proven innovations in educational practice.

A nation of 16,000 school districts can choose to act democratically and differently. We need not consign our children to academic underachievement and a work life of unfulfilled potential. Without change in the teaching and learning of mathematics and science, U.S. children (and not just the underprivileged ones) will become the economic underclass of the 21st-century global economy.

TIMSS is not about educational systems alone. It is about how local communities can act in the national interest to meet national expectations in the education of all their children.

DARYL E. CHUBIN

Education and Human Resources Division

National Science Foundation

Arlington, Virginia


No funds for ocean power

As William H. Avery and Walter G. Berl note in “Solar Energy from the Tropical Oceans” (Issues, Winter 1998), the Department of Energy (DOE) is no longer pursuing ocean thermal energy conversion (OTEC) research and development. In the late 1970s and early 1980s, there was a general expectation that OTEC could assist in addressing concerns about long-term energy availability. Large projects and high funding levels from the private and public sectors were envisioned but did not materialize.

DOE supported construction and validation testing of small-scale OTEC systems, which have provided a technical database sufficient to help industry judge where and when the technology could be commercialized. However, unlike with other renewable technologies, industry leaders have not shown a willingness to make a substantial financial commitment to OTEC, and industry has not formed the supporting infrastructure necessary for commercial development.

In view of the above considerations and the prioritization required to meet budget constraints, DOE has not proposed funding for OTEC in recent years.

ALLAN R. HOFFMAN

Acting Deputy Assistant Secretary

Office of Utility Technologies, Energy Efficiency, and Renewable Energy

U.S. Department of Energy

Washington, D.C.

Rethinking pesticide use

In Nature Wars, Mark L. Winston argues that the public’s equally intense phobias about pests and pesticides often result in irrational pest control decisions. In many situations our hatred of pests leads to unwarranted use of pesticides that poison the environment. In other cases, our fear of pesticides prompts us to let real pest problems grow out of control. Winston’s message is that effective education of public leaders and typical homeowners alike would do much to scale back both the war on pests and the unnecessary pollution it creates.

Winston, professor of biological sciences at Simon Fraser University in British Columbia, Canada, uses a series of well-chosen anecdotes as the primary tool for conveying his message. For example, he tells how a media-fueled battle among scientists, politicians, and environmentalists in Vancouver, British Columbia, almost resulted in a ban on aerial spraying of the bacterial insecticide Bt. The spraying was aimed at eliminating a 1991 gypsy moth infestation that threatened lumber exports to the United States. To allay public concerns, the government chose a product widely used by organic farmers. But a few environmental groups, armed with a single report of harmless Bt bacteria isolated from a patient with an eye lesion, convinced many citizens that they and their children would be sprayed with harmful bacteria. The battle over spraying went on for months.

Although some Vancouver citizens were ready to risk their lumber industry to avoid the Bt spraying, others didn’t seem at all concerned about having hard-core insecticides sprayed inside their homes. A number of exterminators interviewed by Winston said they advised clients to use slow-acting alternatives to insecticides but were bluntly rebuffed. The customers wanted every roach gone by yesterday.

How serious a problem are the chemical tactics used to battle roaches compared with the pests themselves? Winston reports that about one-quarter of all U.S. homes are treated for roaches and that 15 percent of all poisonings from five major insecticides (including those used on roaches) occur in homes. Scientific evidence, on the other hand, indicates that low-level roach infestations do not lead to disease transmission. (The most serious concern is allergenicity for some individuals.)

Winston’s point in illustrating this contrast is basic: As long as people don’t have to come face to face with pests, they hate insecticides and are ready to believe that farmers and public officials are allowing the air, water, and food supply to be poisoned. But when a roach or spider is spotted in a kitchen, concern over cancer and the environment often seems to vanish. Because we don’t know how much Raid it takes to kill a spider, we buy the large can. And if we call the exterminator, we want quick service, although many of us would prefer that the technician show up in an unmarked car.

Rational pest control?

Because of this schizophrenic response by the public to pests and pesticides, Winston argues that it will be difficult to institute rational pest management programs. Effective public education is critical to progress on this front, he believes, and there is evidence that education can work. Farmers once sprayed pesticides on crops without even checking to see if a pest was present. But during the past 25 years, the U.S. Agricultural Extension Service has taught many farmers how to use integrated pest management (IPM). In its purest form, IPM involves determining why a pest problem exists and what combination of changes in a farming system, including pesticide use, would reduce the problem with the lowest environmental and economic costs. In cases in which specific IPM research and education programs have been practical and based on rigorous data, significant reductions in pesticide use have resulted and farmers have saved money. Unfortunately, IPM programs have received limited funding; many more farmers still need hands-on IPM training. IPM programs will not really flourish until the public has enough education to demand more funding for training programs.

It is widely acknowledged that a negative psychological response to insects is deeply embedded in Western culture. Winston argues that this response cannot be overcome simply by accumulating more scientific data. In the short run, he says, money spent on good public relations efforts may be much more useful in establishing rational pest control programs than money spent on research. For example, Winston believes that without the public education campaign launched by the government of British Columbia, the environmentally benign gypsy moth spraying program would ultimately have crashed and burned.

Also embedded in modern Western culture is a general distrust of technology. How can anyone trust scientists and government officials who used to say that DDT was safe? Unfortunately, the public, environmentalists, and sometimes even Winston seem too willing to trust scientists who find monumental problems with pesticide technology. Surely there are problems associated with pesticides, but I am concerned that some scientists draw unwarranted conclusions about their magnitude.

Dubious estimate

For example, Winston argues that pesticide use imposes a hidden cost on society of $8 billion per year. He bases this estimate on an analysis by David Pimentel and his Cornell University colleagues in their 1993 book The Pesticide Question. Yet there are problems with how these estimates were made. Pimentel et al. estimate that “pesticide cancers” cost society $700 million annually, a figure derived from an unpublished study concluding that less than 1 percent of the nation’s cancer cases are caused by pesticides. Pimentel and his colleagues base their calculation on the assumption that the figure is a full 1 percent, but they could just as justifiably have assumed that the number was zero and that there was no cost. They further estimate a $320 million annual loss caused by adverse pesticide effects on honeybees. But this ignores the fact that without the targeted spraying of bees with selective pesticides, the beekeeping industry would recently have been devastated by acarine (mite) pests of the bees.

The largest hidden cost revealed by the Pimentel study is from the death of birds in crop fields. The authors “conservatively” estimate that 10 percent of birds in crop fields die because of pesticide use. At an estimated cost to society of $30 per bird, the total cost is $2 billion. But nature doesn’t work that way. Since organochlorine pesticides were banned, reductions in bird populations have been linked to habitat loss, not to the toxicity of pesticides to vertebrates. Why do so many policymakers (and Winston) accept the Pimentel assessment? Maybe because we love to hate pesticides.
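A short sketch shows how the $2 billion figure is assembled from its two assumptions and how quickly it shrinks if the assumed mortality rate is lower; the alternative rates used below are illustrative, not Pimentel’s.

```python
# How the Pimentel bird-cost estimate is built, and how it scales with its assumptions.
# The lower mortality rates are illustrative alternatives, not figures from the study.
COST_PER_BIRD = 30.0                      # assumed societal value per bird (Pimentel et al.)
PIMENTEL_TOTAL = 2e9                      # the study's $2 billion estimate
implied_bird_deaths = PIMENTEL_TOTAL / COST_PER_BIRD
print(f"Implied bird deaths per year: {implied_bird_deaths:,.0f}")   # about 67 million

for mortality_rate in (0.10, 0.01, 0.001):           # assumed share of crop-field birds killed
    scaled_cost = PIMENTEL_TOTAL * (mortality_rate / 0.10)
    print(f"At {mortality_rate:.1%} mortality: ${scaled_cost / 1e9:.2f} billion")
```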

Although Winston believes that pesticide use needs to be reduced, he differs from many environmentally concerned citizens and governments in his ideas about how it should be done. He argues that the use of genetically engineered plants, such as those that produce a toxin derived from the Bt bacteria, would be far preferable to the spraying of chemical pesticides. Yet a large segment of the public has a different attitude. No matter what the data show about the positive attributes of Bt, anything that is genetically engineered can grab a negative sound bite on TV and be transformed into a fear-producing “fact.” Public concern about the potential hazards of genetically engineered crops has halted commercialization in Europe. In the United States, bioengineered plants that target specific pests without damaging beneficial insects are expected to be planted on more than 10 million acres of farmland during the summer of 1998. The worst problem foreseen by U.S. scientists is that insects will rapidly adapt to the Bt toxins and put us back at square one.

In one of the book’s last chapters, “Moving beyond Rachel Carson,” Winston criticizes the Environmental Protection Agency (EPA) for its focus on regulatory protection instead of providing alternatives to pesticides. EPA, he writes, has become bogged down with the Sisyphean task of assessing the impacts of thousands of new pesticides pouring out of the industrial research pipeline.

In addition, Winston argues that since the publication of Silent Spring, most of the research on alternatives has been conducted and assessed by academic researchers who tend to discover “scientifically interesting but impractical alternatives.” If research were conducted and judged by a more diverse group of stakeholders, including farmers, Winston says, more viable alternatives would be developed. Although not mentioned by Winston, this idea is currently being tested in a Department of Agriculture grant program called Sustainable Agricultural Research and Education, which sponsors only research involving both farmers and scientists. Farmers judge the potential utility of each proposed project before it is funded.

Perhaps the most obvious reason we haven’t replaced pesticides is that they are so damn cheap. Winston says that if the hidden costs of pesticides could be taxed, the alternatives would become more economically appealing. But taxing the environmental and health effects of pesticide use will be a tough battle. Unlike the case of cigarettes, for which rigorous and voluminous data on societal costs have been collected, the United States still does not have good data on pesticide costs. It is amazing that 35 years after Silent Spring was published, the most often quoted estimate is the dubious Pimentel number. I agree with Winston that it would be wonderful if EPA could offer more leadership in developing alternatives to pesticides. However, I also think it would be useful if EPA could offer leadership in determining the real costs of current pesticide use, so that we would actually know just how critical it is to replace specific pesticides or change general patterns of pesticide use.

The Long Road to Increased Science Funding

For decades, the United States has quietly supported one of the key sources of our nation’s innovation and creativity: federal funding of basic scientific, medical, and engineering research. Federal investments in research have yielded enormous benefits to society, spawning entire new industries that now generate a substantial portion of our nation’s economic activity.

Continuation of our nation’s brilliant record of achievement in the creation of knowledge is threatened, however, by the decline in federal R&D spending. In 1965, government investment in R&D was equal to roughly 2.2 percent of gross domestic product (GDP). Thirty-two years later, that figure has dropped to just 0.8 percent. Current projections indicate that the federal R&D budget will continue to decline as a fraction of GDP.

We recently introduced the National Research Investment Act of 1998 (S. 1305), which would double the federal investment in “nondefense basic scientific, medical and precompetitive engineering research” to $68 billion over the next 10 years. The bill would authorize a 7 percent annual increase in funding (in nominal dollars, not adjusted for inflation or GDP growth) for the science and technology portfolios of 12 federal agencies, including the National Institutes of Health (NIH). The NIH budget would increase from $13.6 billion this year to $27.2 billion by 2008. The bill stipulates that research results be made available in the public domain, that funds be allocated using a peer-review system, and that all of the spending increases fit within the discretionary spending caps established by the balanced budget agreement.
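The 7 percent annual increase and the 10-year doubling are mutually consistent, as a quick compound-growth check shows; the sketch below simply applies the stated rate to the NIH figure cited above.

```python
# Compound-growth check: 7 percent a year for 10 years is very close to a doubling.
growth_rate = 0.07
factor = (1 + growth_rate) ** 10
print(f"Growth factor over 10 years: {factor:.2f}")       # ~1.97

nih_1998 = 13.6                                           # NIH budget this year, $ billions
print(f"NIH budget in 2008 at 7%/year: ${nih_1998 * factor:.1f} billion")
# ~ $26.8 billion, close to the $27.2 billion (an exact doubling) specified in the bill.
```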

The bill is an important declaration of principles, but it will require 10 years of patient follow-through if its goals are to be realized. With the end of the Cold War, it is time for the scientific and engineering communities to articulate more forcefully the economic value of what they do. Stated bluntly, the research community will have to become organized in a way that it has not been before.

Scientists and engineers are well positioned to make their case to the taxpaying public and their congressional representatives. Universities employ 2.5 million people in this country. That’s more than the employment provided by the automobile, aerospace, and textile industries combined. Think about how influential any one of those industries is in Washington today compared to science and engineering. Moreover, universities are geographically distributed and often are the largest or second largest employer in any given congressional district. The research community is truly a sleeping giant on the U.S. political landscape.

We believe that S. 1305 represents the best opportunity to awaken that latent political force and build a bipartisan national consensus on significantly increasing the federal investment in civilian R&D over the next decade. The bill is a coalition-building vehicle and an argument that a knowledge-based society must continue to grow its most critical resource: its store of knowledge.

The next few months will be a crucial time for building support for R&D investments. Both political parties have largely cleared the decks with respect to the agendas they have been pursuing for the past several years, and recent improvements in the projected five-year revenue outlook give both parties more room to maneuver within the confines of the federal balanced budget agreement. The federal budget pie is now being sliced for the next half-decade. It is an important time, therefore, for the research community to make its case for increased investments in publicly financed research.

We are also encouraged by the policy work being carried out by our colleagues in the House of Representatives. Under the able direction of Rep. Vernon Ehlers of Michigan, and with the blessing of House leadership, the House Science Committee is drafting a policy document that is intended to guide the federal research infrastructure for the next few decades.

We believe that our efforts and those on the House side are complementary. We ask that you in the scientific community engage with us and help us to reinvigorate the federal research enterprise. We need your help to encourage your senators to cosponsor S. 1305, and the House Science Committee needs your input into its important science policy study. Together, we can ensure that our nation remains a leader in science and technology well into the next century.

From the Hill – Spring 1998

Clinton’s proposed big boost in R&D spending faces obstacles

President Clinton’s FY 1999 budget request, which projects the first surplus in nearly 30 years, calls for increased R&D investments, especially for fundamental science, biomedical research, and research aimed at reducing greenhouse gas emissions.

Under Clinton’s plan, federal R&D support would total $78.2 billion, which is $2 billion or 2.6 percent more than in FY 1998. Nondefense R&D spending would rise 5.8 percent to $37.8 billion, and defense R&D would decline by 0.3 percent to $40.3 billion.

Although the president’s proposed R&D budget request is the largest in years, there are obstacles to achieving it. First, the administration will have to convince Congress to buy into its plan to increase discretionary spending above a cap that was set as part of last year’s balanced budget agreement. The administration has established three special funds, or groups of high-priority nondefense programs, that Congress must now consider supporting. One of these, the Research Fund for America, would include most but not all nondefense R&D. (The other two funds would cover transportation and natural resources and environment programs.) The Research Fund for America would get $31.1 billion in FY 1999, up 11 percent from the previous year.

To get around the discretionary cap, the budget would fund $27.1 billion of the $31.1 billion from discretionary spending that is subject to the cap. The remaining $4 billion, essentially representing all of the requested increases for nondefense R&D, would come from new offsetting revenues outside the cap.

One problem with this approach is that $3.6 billion of the additional $4 billion for the Research Fund for America is projected to come from revenues resulting from tobacco legislation, which is a highly contentious issue in this Congress. Thus, in order to fund the requested increases for nondefense R&D programs, Congress will have to do one of four things: enact tobacco legislation that would allocate a portion of the settlement for research programs; increase discretionary spending, thus breaking last year’s agreement; increase discretionary spending and taxes to compensate; or allocate the spending under the current caps, thus requiring offsetting cuts in non-R&D programs.

The president’s nondefense R&D budget has four priorities. First, biomedical research would get a big boost. The National Institutes of Health (NIH) would receive $14.2 billion, up 8.1 percent. Of this amount, the National Cancer Institute would receive $2.5 billion.

Second, the National Science Foundation (NSF), the primary supporter of basic research in most nonbiomedical fields, would receive $2.9 billion, up 11 percent. NSF’s research directorates would each receive double-digit percentage increases, led by a 16.5 percent increase for research in the Computer and Information Science and Engineering directorate.

Third, energy research in the Department of Energy (DOE) would get increased funding, largely because of the U.S. effort to reduce greenhouse gas emissions in response to last year’s Kyoto Protocol on climate change. DOE’s nondefense R&D funding would jump by 11.1 percent to $3.8 billion, with the increase focused on developing energy-efficient technologies. DOE’s defense R&D would rise by 10.4 percent to $3.3 billion, largely because of more spending for the Stockpile Stewardship Program, which is developing computer models to measure the reliability of the nation’s nuclear weapons.

Finally, basic research spending would rise to $17 billion, up 7.6 percent. NIH would get nearly half of this ($8 billion, up 8.4 percent). The Department of Defense basic research account would increase by 6.6 percent to $1.1 billion.

Tobacco deal could be a boon to biomedical research

Members of Congress are less than optimistic about enacting comprehensive tobacco legislation this year. But if a deal is reached, it’s likely that biomedical research will be a big winner.

In June of 1997, various states and the tobacco industry negotiated an agreement that, if approved by Congress, would settle a number of lawsuits and provide the industry with future legal immunity. In exchange, cigarette and smokeless tobacco companies would pay $368.5 billion to federal, state, and local governments over 25 years. Five Senate bills that were introduced in October and November of 1997 would use the tobacco industry’s billions to support biomedical science.

In S. 1411, Sen. Connie Mack (R-Fla.) and Sen. Tom Harkin (D-Iowa) propose to increase funds for medical research by eliminating the ability of tobacco companies to deduct any lawsuit settlements from their taxes. Those funds, estimated at $100 billion, would be used to establish a National Institutes of Health (NIH) Trust Fund for Health Research. Under the terms of the bill, which is cosponsored by nine Democrats and six Republicans and supported by more than 175 organizations, NIH would decide how most of the money would be spent.

S. 1530, proposed by Sen. Orrin G. Hatch (R-Utah), citing the industry’s past “reprehensible” marketing of tobacco, calls for higher punitive damages: $398.3 billion over 25 years. Most of the money would go to various kinds of health-related research and activities at NIH. Hatch also wants a National Tobacco Research Agenda to be prepared annually by the Food and Drug Administration, the Centers for Disease Control and Prevention, NIH, and others. The agenda would outline research concerning the role of tobacco products in causing cancer, genetic and behavioral factors related to tobacco use, the development of prevention and treatment models, the development of safer tobacco products, and brain development in infants and children.

S. 1415, proposed by Sen. John McCain (R-Ariz.), would establish a Public Health Trust Fund and a National Cessation Research Program. McCain’s bill would restrict research funding to the development of methods, drugs, and devices to discourage individuals from using tobacco products and would provide financial assistance to individuals trying to quit using tobacco products.

Sen. Edward M. Kennedy (D-Mass.), in S. 1492, has proposed establishing a National Biomedical and Scientific Research Board to make grants and contracts for the conduct and support of research and training in basic and biomedical research and child health and development. In a companion bill, S. 1491, Kennedy proposes an excise tax of $1.50 per pack on cigarettes, which would bring in $20 billion a year, including $10 billion a year to fund research. Kennedy’s approach differs from the other bills in that revenues would be generated yearly over an unlimited period: $650 billion over the first 25 years. Equivalent legislation has been introduced in the House by Rep. Rosa DeLauro (D-Conn.).

Sen. Frank R. Lautenberg (D-N.J.) has introduced S. 1343, which proposes to increase the cigarette excise tax rate by $1.50 per pack. The revenues would be deposited in a public health and education trust fund. The Lautenberg bill would allocate much less money to research than would the Kennedy bill. Rep. James V. Hansen (R-Utah) has introduced a House version (H.R. 2764) of the Lautenberg legislation.

Sensible, coherent, long-term S&T strategy sought

With the end of the era of federal budget deficits in sight, House and Senate members from both parties are calling for a doubling of federal nondefense R&D spending during the next 5 to 10 years. At the same time, however, key congressional leaders are warning that future science budgets, in the words of Rep. F. James Sensenbrenner, Jr. (R-Wisc.), “must be justified with a coherent, long-term science policy that is consistent with the need for a balanced budget.”

Last fall, Sensenbrenner, chair of the House Science Committee, and House Speaker Newt Gingrich launched a year-long study to develop “a new, sensible” long-range science and technology (S&T) policy, including a review of the nation’s science and math education programs. They tapped Rep. Vernon Ehlers (R-Mich.), vice chair of the science committee, to lead the study.

Since the end of the Cold War, many policymakers have called for a reconsideration of the role of government, industry, and academia in supporting S&T to better reflect the environment we live in today. Although numerous scholarly reports have recommended various options for a post-Vannevar Bush science policy, the House Science Committee study is the first time that Congress has attempted to address this issue since the mid-1980s.

Ehlers has stated that his goal is to prepare a “concise, coherent, and comprehensive” document by June 1998 in order to obtain the legislative support needed to move ahead. In an effort to maintain bipartisan interest, congressional staffers from both parties have been assigned to assist Ehlers.

Ehlers launched the study by conducting two roundtable discussions. The first involved almost 30 renowned scientists and policy experts; the second included young, early-career scientists. During the roundtable discussions, Ehlers and his staff posed a long list of questions on topics such as encouraging industry investment; enhancing collaborative research partnerships among government, industry, and academia; and contributing to international cooperation in research. Readers can contribute to the study by providing their own answers to the questions posed to the experts. To do so, visit the science policy study Web site at <http://www.house.gov/science/science_policy_study.htm>.

Cloning debate heats up

A year after the world learned that an adult mammal had been successfully cloned, the issue of human cloning continues to be a major concern in Congress. After the initial excitement about cloning died down last year, it appeared unlikely that any legislation would be passed soon. But earlier this year Congress was spurred into action after Chicago physicist Richard Seed said he would set up a lab to use somatic cell nuclear transfer, the cloning technique used to create Dolly the sheep, to clone human beings. Seed claimed that he had some financial backing as well as an infertile couple willing to participate in the procedure.

Two important bills were introduced in the Senate. Sen. Bill Frist (R-Tenn.), a medical doctor, and Sen. Christopher Bond (R-Mo.), a longstanding opponent of human embryo research, introduced legislation prohibiting the creation of a human embryo through somatic cell nuclear transfer. Sen. Dianne Feinstein (D-Calif.) and Sen. Edward Kennedy (D-Mass.) introduced a bill prohibiting the implantation of a cloned human embryo in a woman’s uterus, thus avoiding the controversial issue of human embryo research.

The Republican leadership in the Senate, seeking to address concerns about human cloning and embryo research, brought a bill equivalent to the Bond-Frist legislation swiftly to the floor for a vote. But Feinstein and Kennedy led a filibuster that could not be broken.

The House has moved more cautiously on this controversial issue. Last year, the Science Committee passed a bill sponsored by Rep. Vernon Ehlers (R-Mich.) that would ban federal funding for human cloning research. Earlier this year, the Commerce Committee held a full-day hearing on the legal, medical, ethical, and social ramifications of cloning.

At the heart of the cloning debate is the question of whether the benefits that cloning research may yield outweigh the possible risks to human morality, identity, and dignity. Not least among the concerns about cloning is the possibility that imperfect techniques could produce damaged human embryos.

Further complicating the cloning debate are recent questions about the validity of the experiment that led to the birth of Dolly. The experiment using adult cells to clone an animal has not yet been duplicated, and skepticism is rising. Ian Wilmut, the Scottish scientist who created Dolly, has promised to prove that Dolly is the real thing.

Debate over database protection continues

A House bill aimed at strengthening copyright protection for database publishers is arousing concern among some scientists, educators, and librarians. H.R. 2652, introduced by Rep. Howard Coble (R-N.C.), seeks to address various concerns raised by database publishers. Under current law, databases are entitled to copyright protection only if the information contained is arranged or selected in an original way. The effort involved in simply compiling the data isn’t enough to justify protection. Database publishers claim that they work long and hard to compile their data, regardless of how it is organized. Current law, they say, leaves them vulnerable to others who wish to duplicate their products. “Without effective statutory protection, private firms will be deterred from investing in database production,” warns the Information Industry Association.

Coble’s bill would prohibit the use of data from a database in a way that would harm the marketability of the original database. The prohibition would apply only if the data in question represented an “investment of substantial monetary or other resources.”

The bill is the latest legislative attempt to increase database copyright protection in the United States. In 1996, then-Rep. Carlos Moorhead introduced a bill that would have created a sui generis model, or a new category of intellectual property protection for databases. In 1997, the U.S. delegation to a meeting of the World Intellectual Property Organization (WIPO) also backed a new category of protection for databases.

But academic and research interests opposed the U.S. proposal, arguing that it did not adequately protect research, educational, and other “fair uses” of data and would give database publishers too much control over the data their products contained. The new protection, they claimed, might prohibitively raise access costs and impede research in data-intensive areas such as study of the human genome and climatology. The United States subsequently dropped the proposal from consideration at the WIPO meeting.

However, the European Union approved a directive calling on its member nations to implement sui generis database protection. Because the European directive would not cover databases from nations without something akin to sui generis protection on their books, the pressure on the United States has increased, resulting in Coble’s proposed legislation.

Opponents of Coble’s bill contend that although the European directive will deny the new sui generis protection for U.S. databases, existing copyright protections will remain in place, leaving U.S. companies no worse off. In addition, Jonathan Band, general counsel of the Online Banking Association, has noted that U.S. companies could still receive sui generis protection if they established subsidiaries in Europe.

Unlike the previous proposals, Coble’s bill does not follow the sui generis model. Instead, it is based on “misappropriation” of data. Although critics acknowledge that moving to the idea of misappropriation is a step in the right direction, they argue that the Coble bill does not include a strong enough exception for nonprofit, scientific, or educational uses of data. “The difficulties of identifying and implementing a suitable balance between incentives to invest and the preservation of both free competition and essential public-good uses should not be underestimated, nor should legislation be rushed in order to meet deadlines imposed by foreign bureaucrats,” said Vanderbilt University law professor Jerome Reichman at a House hearing last fall.

The House Judiciary Committee Subcommittee on Courts and Intellectual Property, which has been considering the bill, was expected to mark it up in early March 1998.

Patent Nonsense

Pending legislation threatens to tilt the intellectual-property playing field toward established market giants and greatly compound the risks for innovators and their backers. The bill’s effects would be so far-reaching that a group of more than two dozen Nobel laureates in science and economics, ranging across the political spectrum from Milton Friedman to Franco Modigliani to Paul Samuelson, have taken the unusual step of writing an open letter of opposition to the U.S. Senate. They warn that the pending legislation threatens “lasting harm to the United States and the world.” According to the protesting laureates, the Senate bill (S. 507), championed by Sen. Orrin Hatch (R-Utah), will “discourage the flow of new inventions that have contributed so much to America’s superior performance in the advancement of science and technology” by “curtailing the protection obtained through patents relative to the large multinational corporations.”

S. 507, a version of which has already passed in the House, is a multifaceted bill that would make many changes in the U.S. patent system, some of which have desirable aims though not necessarily the best means. But at the heart of the bill is a provision to create “prior-user rights,” which would undermine one of the fundamental goals of patents: to encourage the publication of inventions to stimulate innovation. The patent system works by giving an inventor a temporary exclusive right to use or license the invention in exchange for publishing it so that others can learn from it. Currently, entities that suppress, conceal, or abandon a scientific advance are not entitled to patents or other intellectual property rights. It is the disclosure of what might otherwise remain a trade secret that earns the property right. But under S. 507’s prior-user rights provision, if a company elects to keep an idea secret instead of patenting it, it might still acquire significant property rights by claiming it already had the idea moving toward commercialization when someone else patented it. The company would then be allowed to use the invention without paying royalties to the patent holder. There would be no limits on volume or usage, and a business could be sold with its prior-user rights intact.

The rationale offered for prior-user rights is that because of the costs of patent protection, U.S. companies must choose carefully what they patent because it is impractical to patent every minor innovation in a product or process. Advocates raise the specter that a company that neglects to patent some small change in an important product could be prevented from using the innovation if someone later patented the idea. But former patent commissioner Donald Banner disputes this argument: “Companies don’t have to file patents on every minor invention in order to protect themselves. If something is of marginal value, all companies have to do is publish it. Then it can’t be patented and used against them.”

It is understandable that many lawyers for large corporations, including foreign companies, might covet prior-user rights. But prior-user rights gut the core concepts of the U.S. patent system, because they slow the dissemination of knowledge by promoting the use of trade secrets and destroy the exclusivity that allows new players to attract startup financing. That is why the laureates warn that “the principle of prior-user rights saps the very spirit of that wonderful institution that is represented by the American patent system.”

Robert Rines, an inventor and patent attorney who founded the Franklin Pierce Law Center, warns that “prior-user rights will destroy the exclusivity of the patent contract and thereby chill the venture capital available for many startups.” After taking the sizable risks of R&D and market testing, a fledgling enterprise would collapse if a market giant such as GE, 3M, Intel, Mitsubishi, or Microsoft suddenly followed up with a no-royalty product. Moreover, the litigation costs of challenging the validity of prior-user rights will favor those with deep pockets.

Consider the impact on university technology transfer. According to an MIT study, in 1995 alone, U.S. universities granted 2,142 licenses and options to license, most of them exclusive, on their patents. These licenses provide income for the universities and are often essential to the success of startup companies. Cornelius J. Pings, president of the Association of American Universities, recently wrote Senator Hatch that Hatch’s prior-user rights provision will effectively eliminate a university’s ability to exclusively license inventions. Thus, prior-user rights would dramatically interfere with the university-to-industry innovation process.

Inevitably, the loss of exclusivity in patents will also make university research more dependent on the largesse of big companies and put universities in a weaker bargaining position. If universities cannot count on income from exclusive patents to help support research, they will turn to large companies that can provide direct research funding, with universities losing some control over research direction. Moreover, greater reliance on trade secrets, combined with prior-user rights, will increase the incentive for industrial espionage, to which the open university environment is particularly susceptible.

There is also a constitutional question. Most legal scholars, including James Chandler, head of the Intellectual Property Law Institute in Washington, D.C., interpret the Constitution’s provision on patents as intending that the property right be “exclusive.” Prior-user rights would eliminate that exclusivity and thus lead to a potentially lengthy legal battle that would put patents on uncertain footing for an extended period.

The bill’s bulk obfuscates

One of the difficulties in talking about S. 507 is that it is not just about prior-user rights: It is a complex omnibus bill that also includes controversial provisions such as corporatizing the patent office and broadening the ability of a patentee’s opponents to challenge a patent within the patent office throughout the life of the patent.

The bill was designed not for reasoned debate of its multiple features but for obfuscation. The sponsors have modified and expanded the bill repeatedly in strategic attempts to placate opponents. Significant differences exist between the bill passed in the House and the one under consideration by the Senate. No one can be certain what would result from a House-Senate conference to merge two bills that are each more than 100 pages long.

The director of Harvard University’s Office of Technology and Trademark Licensing, Joyce Brinton, observes that although the original bill was much worse, “bill modifications to re-examination and prior-user rights have not fixed all the problems.” On balance, says Brinton, “the bill is not a good deal for universities seeking to license the fruits of their research. It should be divided into component parts that can be dealt with separately.”

Says Janna Tom, vice president for external relations for the Association of University Technology Managers, “University organizations have difficulties putting forth a broad consensus position on an entire omnibus bill packed with so many patent issues, some of which we don’t oppose, but some of which, such as prior-user rights, are not favorable to the university tech transfer community. It would be far easier to address issues one by one, but Congress seems reluctant to separate them.”

The House version of the bill (H.R. 400) also suffers from “the attempt to bundle several pieces of patent legislation into one bill,” observes Shirley Strum Kenny, president of the State University of New York at Stony Brook, with the “parts that may be beneficial to all inventors outweighed by the harmful sections.” For example, Kenny and many others support a provision of the bill that lengthens the term of patents by amending recent legislation that effectively shortened the term of many patents.

Patent policy isn’t a topic that lends itself to the usual sausage-making of Congress. Any serious attempt to improve the patent bills should begin with the ability to address their measures separately. “What we want,” says MIT’s Franco Modigliani, “is that the present version (S. 507) should be junked, should not even be presented to the Senate.”

Indeed, no need has been demonstrated for moving this bill quickly or for keeping its elements intact. The more closely one looks at the bill, the more its main thrust appears to be an effort by companies at the top to pull the intellectual property ladder up after them. The patent system may need periodic updating and fine-tuning to enhance its mission of bringing new blood to our economy, but it is too important to the economic health of the country to be subjected to an ill-considered, wholesale overhaul. Repeated corrections of hasty actions will only confuse and clog the system. Let’s take the time to consider each of the proposed changes separately and deliberately.

Spring 1998 Update

Progress begins on controlling trade in light arms

In an article in the Fall 1995 Issues (“Stemming the Lethal Trade in Small Arms and Light Weapons”), I urged that increased international attention be given to the problem of unregulated trafficking in small arms and light weapons. This trade, I argued, had assumed increased significance in recent years because of its insidious role in fueling ethnic, sectarian, and religious conflict. Although heavy weapons are occasionally employed in such conflict, most of the fighting is conducted with assault rifles, machine guns, land mines, and other light weapons. Hence, efforts to control the epidemic of civil conflict will require multilateral curbs on the trade in these weapons.

Although I was optimistic that this problem would gain increased attention in the years to come, I assumed that this would be a long-term process. In recent months, however, the issue has gained considerable international visibility, and a number of concrete steps have been taken to bring it under control.

Several factors account for this rise in visibility. Although a number of major conflicts have been brought under control in recent years, the level of human slaughter produced by ethnic and sectarian violence has shown no sign of abatement. Recent massacres in Algeria and Chiapas have demonstrated, once again, how much damage can be inflicted with ordinary guns and grenades. Efforts to contain the violence, moreover, have been stymied by recurring attacks on UN peacekeepers and humanitarian aid workers.

Recognizing that international efforts to address the threat of ethnic and internal conflict have been undermined by the spread of guns, a number of governments and nongovernmental organizations (NGOs) have begun to advocate tough new measures for curbing this trade. Most dramatic has been the campaign to ban antipersonnel land mines, which reached partial fulfillment in December 1997 with the signing of an international treaty to prohibit the production and use of such weapons. (The United States was among the handful of key countries that refused to sign the accord.)

Progress has also been made in curbing the illicit trade in firearms. In November 1997, President Clinton signed a treaty devised by the Organization of American States (OAS) to criminalize unauthorized gun trafficking within the Western Hemisphere and to require OAS members to establish effective national controls on the import and export of arms. A similar, if less exacting, measure was adopted by the European Union (EU) in June 1997, and tougher measures will be considered at the G-8 summit this summer.

Further steps were outlined in a report on small arms released by the UN in September 1997. The result of a year-long study by a panel of governmental experts, the report calls on member states to crack down on illicit arms trafficking within their territory and to cooperate at the regional and international level in regulating the licit trade in weapons.

No one doubts that serious obstacles stand in the way of further progress on this issue. Many states continue to produce light weapons of all types and are unlikely to favor strict curbs on their exports. But the perception that such curbs are desperately needed is growing.

The priority, at this point, is to identify a reasonable but significant set of objectives for such efforts. Unlike the situation regarding land mines, a total ban on the production and sale of light weapons is neither appropriate nor realistic, as most states believe that they have a legitimate right to arm themselves for external defense and domestic order. Rather, the task should be to distinguish illicit from licit arms sales and to clamp down on the former while establishing internationally recognized rules for the latter. Such rules should include a ban on sales to any government that engages in genocide, massacres, or indiscriminate violence against civilians; uses firearms to resist democratic change or silence dissent; or cannot safeguard the weapons in its possession. And, to provide confidence in the effectiveness of these efforts, the UN should enhance transparency in the arms business by including light weapons in its Register of Conventional Arms.

Michael Klare

Wake-up Call for Academia

Academic Duty is an important book. It provides a corrective to what Donald Kennedy, former president of Stanford University, points to as the academy’s one-sided focus: academic freedom and rights at the expense of academic obligations and responsibilities. The book is structured around chapters dealing with eight dimensions of faculty responsibilities, but it is much more than a manual on academic duties. Rather, it may be seen as a wake-up call, beckoning those in the academy to understand and take their responsibilities seriously or risk jeopardizing an already fragile institution. Indeed, Kennedy’s challenge to faculty is placed in the context of public concern, discontent, anger, and mistrust directed at higher education.

Kennedy bluntly states how important the faculty is: “In the way they function, universities are, for most purposes, the faculty.” Still, it is clear that the book is also targeting another audience. Parents, legislators, trustees, and prospective trustees will find it a first-rate introduction to what is valued by faculty and how colleges and universities are organized and governed. In an introductory section, he gives a brief overview of the history and development of higher education in the United States and addresses the contemporary situation, post-1970, which has been characterized by tight budgets, an aging professoriate, and tight job markets. Interested observers of higher education can also find out about governance (chapter 5), the role of research (chapter 6), indirect costs in funding (chapter 6), and academic tenure (chapter 5).

What are the duties?

Kennedy writes that “much of academic duty resolves itself into a set of obligations that professors owe to others: to their undergraduate students, to the more advanced scholars they train, to their colleagues, to the institutions with which they are affiliated, and to the larger society.” He develops these duties in chapters entitled “To Teach,” “To Mentor,” “To Serve the University,” “To Discover,” “To Publish,” “To Tell the Truth,” “To Reach Beyond the Walls,” and “To Change.”

“Responsibility to students is at the very core of the university’s mission and of the faculty’s academic duty,” Kennedy writes. Yet the public is beginning to question the university’s commitment to this mission, and many faculty are unprepared for or unclear about their obligations to students. Although students expect faculty to be engaged in teaching, faculty often focus more on scholarly endeavors.

Much of the blame for this situation can be found in the nature of graduate student training. In research universities, where faculty throughout higher education are trained, students hone their skills in research in specialized fields. Then, as newly appointed faculty members, they quickly learn that their primary focus must be on research and publication in order to secure tenure. This intense focus on research, frequently lasting for more than 10 years, makes it unlikely that faculty are suddenly going to change their orientation toward undergraduate teaching, mentoring, and advising.

Except for setting teaching loads, many institutions say little about what faculty members owe their students. “The very fact that ‘professional responsibility’ is taught to everyone in the university except those headed for the academic profession is a powerful message in itself,” Kennedy writes. And the expectations of citizenship are set very low. Until the reward system in terms of tenure, promotion, and salary increments recognizes teaching more fully, the incentive to maintain the status quo will be strong.

Teaching values

In examining the important and controversial question of what to teach, Kennedy focuses in particular on criticism that universities fail to teach values. He asserts that values are important, referring, as I understand him, to basic democratic values such as respect for persons, liberty, equality, justice for all, and fairness. He makes two important points. First, he distinguishes between values and conduct. Referring to a statement made by William Bennett about getting drugs off campus, Kennedy argues that such efforts concern the regulation of conduct, not values. Correct. Second, he stresses the importance of students encountering different traditions and modes of reasoning as the basis for forming their own values. (Elsewhere in the book, he also emphasizes the importance of teaching critical thinking and analysis, a position with which most academics feel comfortable.)

Space does not permit comment on each of the duties addressed by Kennedy, but I will focus on two that struck me as having special significance. The chapter in which Kennedy’s passionate concern is most evident is “To Tell the Truth.” Returning to the theme of the university and public mistrust, he says that “higher education’s fall from grace in the past decade” has resulted partially from research misconduct. The resultant media attention, congressional hearings, and personal attacks within the academic community have caused severe damage.

Kennedy reviews some well-known cases, including those of Robert Gallo, Mikulas Popovic, and David Baltimore, and argues that the academic community to date has failed to deal well with the research misconduct issue. Scientists have been either too tolerant or silent in the face of misconduct or careless in their analysis and judgments, he believes. In turn, universities, with their too private and too nonadversarial internal processes, have erred in two directions: They have been “overly protective of [their] own faculty . . . or overly responsive to external cries for a scalp.” Government investigations, aided by panels of scientists, and prosecution efforts have been ineffective as well, he says.

The upshot is that careers and reputations have been badly and unfairly damaged. Redress of these wrongs has come too late and has often been inadequate. Kennedy suggests that appropriate procedures will have to evolve. Surprisingly, he recommends, contrary to almost all university grievance procedures, early participation by legal counsel and an opportunity to challenge witnesses.

Another chapter that deserves mention is “To Reach Beyond the Walls,” in which Kennedy promotes technology transfer as the newest academic duty. Fulfilling this duty, however, has created some complex conflict of interest problems regarding patenting, limits on faculty obligations to their institutions, and appropriate compensation levels. Kennedy’s discussion of the issues involved usefully demonstrates how new duties raise new problems. This theme is picked up in the final chapter on the duty to change.

Minor flaws

Although Academic Duty is a timely and thoughtful commentary on the current state of higher education, it doesn’t sufficiently address three areas. First, the book is primarily focused on research universities. Although Kennedy tries to include a discussion of liberal arts colleges, state colleges, and community colleges, his analysis is understandably based on his experiences during almost four decades at Stanford. The emphasis on classroom teaching and mentoring, the concept of faculty loads, and the basis for tenure decisions are substantially different in many of the nonresearch-based institutions. Accordingly, faculty in those institutions respond to different expectations and reward systems.

Second, the book is too heavily weighted toward science, Kennedy’s field of work. Yet there are important differences between science and the humanities and social sciences that affect how graduate students come to think about undergraduate teaching. For example, many science graduate students are financially supported by research grants; students in the humanities and many of the social sciences, by teaching assistantships. As a result, while science students are working as lab assistants, other graduate students are assisting in and teaching undergraduate courses. In the best assistantship arrangements, the beginning graduate student works with a professor in an apprentice relationship, learns how to mentor in discussion sessions and while providing guidance on term paper development, and anticipates that teaching will be a major part of his or her professional responsibilities. Indeed, many humanities students find the teaching and mentoring experiences much more rewarding than research and thus decide to focus their careers on teaching.

Finally, Kennedy’s discussion of academic freedom and academic duty as counterparts is a stretch. He correctly points out that “academic freedom refers to the insulation of professors and their institutions from political interference.” He adds, again correctly, that there is too much talk about academic freedom and not enough about academic duty. But I disagree with his suggestion that faculty have neglected their academic duties because of the focus on academic freedom. Although many of the duties that he enunciates are “vague and obscure,” I think that the reasons have little to do with claims of academic freedom. Kennedy acknowledges that the focus on research at the expense of teaching is rooted in the nature of graduate training, not academic freedom. A different set of problems arises with mentoring, but again these are not based on academic freedom. The problems cited in serving the university and publishing also have many sources other than academic freedom.

Academic Duty can profitably be read by people both inside and outside of the academy. The author knows educational institutions, and, from his rich experience as president of Stanford, he engages the reader in a critical discussion about our obligations to both students and society.

Unleashing Innovation in Electricity Generation

This nation’s electric power industry is undergoing profound change. Just when lawmakers are replacing regulated monopolies with competitive entrepreneurs, a new generation of highly efficient, low-emission, modular power technologies is coming of age. Yet surprisingly little policy discussion, either in the states or in Washington, has focused on how to restructure this giant industry in ways that spur technological innovations and productivity throughout the economy.

Sheltered from competitive forces, electric utilities convert fossil fuel into electricity less efficiently today than they did in 1963. Regulated monopolies have had no incentive to take advantage of technological advances that have produced electric generating systems that achieve efficiencies approaching 60 percent, or as much as 90 percent when waste heat is recovered. As a result, traditional power companies burn twice as much fuel (and produce twice as much pollution) as necessary.
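
The claim that traditional plants burn twice the necessary fuel follows directly from the fact that fuel input is inversely proportional to conversion efficiency. The sketch below illustrates that relationship; the roughly 30 percent figure used for the existing utility fleet is an assumption chosen for the example, not a number given in the article.

    # Fuel burned per unit of electricity scales as 1 / efficiency.
    # The 30 percent fleet-average efficiency is an assumed, illustrative value.
    def relative_fuel(efficiency):
        return 1.0 / efficiency

    old_fleet = relative_fuel(0.30)   # assumed average for today's plants
    new_plant = relative_fuel(0.60)   # modern system cited in the text
    print(round(old_fleet / new_plant, 1))  # 2.0: twice the fuel, twice the pollution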

Developing an electricity-generating industry that thrives on innovation will require much more than simply increasing R&D expenditures. Government programs and futuristic technologies are not the answer. Rather, progress will come when the barriers to competition are removed and entrepreneurial companies are freed to recreate the electricity system along market-driven lines.

Utility restructuring, if done this way, can unleash competitive forces that will disseminate state-of-the-art electric systems, foster technological innovations, double the U.S. electric system’s efficiency, cut the generation of pollutants and greenhouse gases, enhance productivity and economic development, spawn a multibillion-dollar export industry, and reduce consumer costs. But helping this new electrical world emerge means overcoming numerous legal, regulatory, and perceptual barriers.

An industry in flux

With assets exceeding $600 billion and annual sales above $210 billion, electric utilities are this nation’s largest industry: roughly twice the size of telecommunications and almost 30 percent larger than the U.S.-based manufacturers of automobiles and trucks. The pending changes affecting this giant industry will have a profound impact on the U.S. economy.

Rapid change and innovation marked the industry’s founding almost a century ago. Thomas Edison, William Sawyer, William Stanley, Frank Sprague, Nikola Tesla, and George Westinghouse competed with an array of new technologies. Each struggled to perfect dynamos that generated power; transformers and lines that delivered it; and incandescent light bulbs, railways, elevators, and appliances that used this versatile energy source.

Their competition sparked a technological and business revolution in the late 19th century. But this early competition created chaos as well as opportunity. Unique electrical arrangements conflicted with one another. More than 20 different systems operated in Philadelphia alone. A customer moving across the street often found that his electrical appliances no longer worked.

To ensure order and to protect themselves from “ruinous competition,” executives initially tried to fix prices and production levels among themselves, but the Sherman Antitrust Act of 1890 rendered such efforts illegal. The more effective step, led by J. P. Morgan and other bankers, was to merge and consolidate.

Within the next few decades, the electricity business changed dramatically. On the engineering front, larger and more efficient generators were built, a new filament constructed of tungsten produced an incandescent lamp that was preferable to a gas flame, and long-distance transmission lines sent power over great distances. As the cost of a kilowatt-hour from a central power station dropped from 22 cents in 1892 to only 7 cents three decades later, electricity became a necessity of life.

On the business front, electric companies became integrated monopolies, generating, transmitting, and distributing electricity to consumers in their exclusive service territories. For some 60 years, electric utilities provided reliable power in exchange for guaranteed government-sanctioned returns on their investments.

Recent policy and technological changes, however, are enabling entrepreneurs to generate power for less than the average price charged by utilities, ending the notion that this industry is a natural monopoly. These small-scale electricity generators are introducing competition into the electric industry for the first time in three generations. Nonutility production almost doubled from 1990 to 1996 and now contributes some 7 percent of U.S. electricity.

Three pieces of federal legislation opened the door to this limited competition. First, the Public Utility Regulatory Policies Act (PURPA) of 1978 enabled independent generators to sell electricity to regulated utilities. Second, deregulation of the natural gas market lowered the price and increased the availability of gas, a relatively clean fuel. Third, the Energy Policy Act of 1992 (and subsequent rulings by the Federal Energy Regulatory Commission) made it possible for wholesale customers to obtain power from distant utilities.

Noting the development of wholesale competition, some states (Massachusetts, California, Rhode Island, New Hampshire, Pennsylvania, and Illinois) have adopted specific plans to achieve retail competition, and most other states are considering the issue. Several lawmakers have introduced federal legislation to advance such retail competition, to ensure reciprocity among the states, and to restructure the Tennessee Valley Authority and other federal utilities.

To prepare for competition, some utilities have merged, others have sold their generating capacity, and still others have created entrepreneurial unregulated subsidiaries that are selling power in the competitive wholesale market. It appears that integrated utility monopolies are being divided. A likely scenario is that the emerging electricity industry will include competitive electricity-generating firms producing the power, federally regulated companies transmitting it along high-voltage lines, and state-regulated monopolies distributing the electricity to individual consumers and businesses. Federally chartered independent system operators would ensure the grid’s stability and fair competition.

In addition to PURPA and the Energy Policy Act, several other factors are spurring the drive toward competition in the electricity-generating industry. The paramount concern is cost. The Department of Energy (DOE) estimates that restructuring will save U.S. consumers $20 billion per year; some analysts predict a $60 billion annual savings, or $600 per household. Businesses that consume a substantial amount of electricity have been leading advocates for competition among electricity suppliers.

Environmental concerns further the call for innovation-based electric industry restructuring. A large share of the greenhouse gas emissions responsible for climate change, fully one-third of U.S. carbon dioxide emissions, comes from burning fossil fuels in electric generators. Another third comes from production of thermal energy, and roughly half of that amount could be supplied by heat not used by the electric industry. To appreciate the opportunity for improved efficiency, consider that U.S. electric generators throw away more energy than Japan consumes. Regulated pollutants can be scrubbed from power plant smokestacks, but the only known way to reduce net carbon dioxide emissions is to burn less fossil fuel. Fortunately, modern technologies can cut emissions in half for each unit of energy produced.

Also pushing utility restructuring are the desire of nonutility power producers to sell at retail, protests about regional disparities in price, and failures of the old planning regime. Proponents of the status quo abound, however. Several analysts concentrate on the potential problems associated with change. Some environmentalists, for instance, fear the potential increased output from dirty coal-fired generators and the potential demise of utility-based demand-side management programs that are designed to help customers use electricity more efficiently.

Most of the debate about utility restructuring, however, has focused on just two issues: when to impose retail competition and whom to charge for the “stranded costs” of utility investments, such as expensive nuclear power plants, which will not be viable in a competitive market. The two issues are related because the longer retail competition is postponed, the more time utilities have to recoup their investments. The strategies proposed for dealing with these issues vary dramatically. Utilities argue that current customers that no longer want to buy electricity from them should be forced to pay an “exit fee” to help pay for the stranded costs. Independent power producers maintain that utilities could pay for stranded costs by improving the efficiency of their operations.

Both approaches raise questions. Although high exit fees would retire utility debt, they also would discourage the growth of independent producers. And one cannot state with certainty how much utilities could save through efficiency improvements, though the potential appears to be substantial. For example, utilities could eliminate the need for an army of meter readers trudging from house to house by installing meters that could be read electronically from a central location. Adding computer-controlled systems that constantly adjust combustion mixes in turbines could increase efficiency by as much as 5 percent.

Only the beginning

The arrival of wires early in this century introduced lights, appliances, and machines that lengthened days, reduced backbreaking drudgery, and sparked an industrial revolution. Still, we are only on the threshold of tapping electricity’s potential value. Innovation can improve the efficiency with which electricity is generated and transmitted. It can enable a wealth of new electrotechnical applications within U.S. industries and for export throughout the world. It also can spark an array of new consumer services.

Consider first the potential for vastly improved electricity generators. Efficiencies of natural gas-fired combustion turbines already have risen from 22 percent in the mid-1970s to 60 percent for today’s utility-sized (400 megawatts) combined cycle units that use the steam from a gas turbine’s hot exhaust to drive a second turbine-generator. Simpler and smaller (5 to 15 megawatts) industrial turbines have electrical efficiencies of about 42 percent and system efficiencies above 85 percent when the waste heat is used to produce steam for industrial processes. Small-scale fluid-bed coal burners and wood chip boilers also produce both electricity and heat cleanly and efficiently. Since 1991, production of thin-film photovoltaic cells has increased more than 500 percent, and more efficient motors and new lightweight materials have reduced the costs of wind turbines by 90 percent.
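
The roughly 60 percent figure for combined-cycle units can be seen from how the two stages share the fuel’s energy: the steam cycle recovers electricity from heat the gas turbine would otherwise reject. The sketch below is a simplified model of that bookkeeping; the individual stage efficiencies are assumed, illustrative values rather than figures from the article.

    # Simplified combined-cycle bookkeeping: overall efficiency is the gas
    # turbine's share plus the steam cycle's recovery of the rejected heat.
    # Stage efficiencies below are assumed, illustrative values.
    def combined_cycle(eta_gas, eta_steam):
        return eta_gas + (1.0 - eta_gas) * eta_steam

    print(round(combined_cycle(0.38, 0.35), 2))  # ~0.60, i.e., about 60 percent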

Several other technologies are on the horizon, including fuel cells that produce electricity through intrinsically more efficient (and cleaner) chemical reactions rather than combustion. The first generation of commercial fuel cell units, expected on the market in 2001, should achieve 55 percent electric efficiency; when the units are used to produce both power and heat, total system efficiencies will approach 90 percent.

Innovation also is possible in the transmission and distribution grid. Insurers, environmental groups, and others have raised concerns about the grid’s stability and reliability, and growing numbers of digital technology users are concerned about power quality. More and longer-distance exchanges of power in an open electricity market could push the limits of our human-operated electricity dispatch system. Very small errors can become magnified and ripple through the system, increasing the risk of overloadings, fires, and transformer explosions. Fortunately, a host of software, hardware, and management technologies are on the horizon. Sophisticated software based on neural networks (a type of self-organizing system in which a computer teaches itself to optimize power transfers) could greatly increase power quality and reduce the risk of overloads. More robust and efficient distribution technologies, such as high-temperature superconducting transformers and wires, could further cut that risk. Several engineers also envision a distributed or dispersed energy system in which information links increasingly substitute for transmission lines, and most electricity is used in efficient “power islands.” Two-way communication and control between generator and customer can dramatically reduce the need for overcapacity. The more intelligent the system, the easier it will be to ensure that electricity takes the shortest and most efficient path to the customer.

Even the near-term possibilities for new consumer services are substantial. “Imagine the elderly and the poor having a fixed energy bill rolled into their mortgage or rent,” suggests Jeffrey Skilling, president of Enron Corp., one of the new entrepreneurial power producers. “Imagine an electric service that could let consumers choose how much of their home power is generated by renewable resources. Imagine a business with offices in 10 states receiving a single monthly bill that consolidates all of its energy costs.” Because power companies already have a wire connection to virtually every home and business, they are exploring their potential to provide a host of other services, including home security, medical alerts, cable television, and high-speed Internet access.

One promising option is onsite electricity production. “In ten years,” predicts Charles Bayless, chairman of Tucson Electric Power, “it will be possible for a 7-Eleven store to install a small ‘black box’ that brings natural gas in and produces heating, cooling, and electricity.”

In addition to avoiding transmission and distribution losses, onsite power generators offer manufacturers and building owners (or, more probably, their energy service companies) the opportunity to optimize their power systems, which would lead to increased efficiency, enhanced productivity, and lowered emissions. A study by the American Council for an Energy Efficient Economy suggests that such gains ripple through the industrial operation, as productivity benefits often exceed energy savings by more than a factor of four.

Mass-produced, small distributed generators could be a viable alternative to large centralized power plants. To illustrate the practicality of this option, engineers point out that Americans currently operate more than 100 million highly reliable self-contained electric generating plants-their cars and trucks. The average automobile power system, which has a capacity of roughly 100 kilowatts, has a per-kilowatt cost that is less than one-tenth the capital expense of a large electric generator.

Improved electric generators will also spark new technologies and systems within U.S. industry. Noting that electrotechnologies already have revolutionized the flow of information, the processing of steel, and the construction of automobiles, the Electric Power Research Institute (EPRI) envisions future applications that offer greater precision and reliability; higher quality, portability and modularity; enhanced speed and control; and “smarter” designs that can be manufactured for miniaturized end-use applications. Innovative electrotechnologies also will dramatically reduce the consumption of raw resources and minimize waste treatment and disposal.

U.S. development of efficient generators and modern electrotechnologies could also open a vast export market. The growth in global population, combined with the rising economic aspirations of the developing countries, should lead to significant electrification throughout the world.

Such benefits are not pie-in-the-sky ramblings by utopian scientists or overenthusiastic salesmen. According to a study by the Brookings Institution and George Mason University, restructuring and the resultant competition have generated cost savings and technological innovations in the natural gas, trucking, railroad, airline, and long-distance telecommunication industries. “In virtually every case,” they concluded, “the economic benefits from deregulation or regulatory reform have exceeded economists’ predictions.”

Consider the competition-sparked innovations in the telecommunications market. Within a relatively short period, consumer options increased from a black rotary phone to cellular, call waiting, voice mail, paging, long-distance commerce, and video conferencing. Similar gains could occur in the electricity industry.

What’s needed, however, is a policy revolution to accompany the emerging technological revolution. Laws and regulations must become innovation-friendly.

MIT meets the regulators

Although modern electric technologies can provide enormous benefits, implementing them is usually problematic, even for a technological supersophisticate such as the Massachusetts Institute of Technology (MIT). In 1985, MIT began to consider generating its own electricity. With its students using PCs, to say nothing of stereos, hair dryers, and toaster ovens, the university faced soaring electricity costs from the local utility, Cambridge Electric Company (CelCo). Many of MIT’s world-class research projects were also vulnerable to a power interruption or even to low-quality power. At the same time, MIT’s steam-powered heating and cooling system, which included 1950s-vintage boilers that burned fuel oil, was a major source of sulfur dioxide, nitrogen oxides, carbon monoxide, and volatile organic compounds.

The university finally settled on a 20-megawatt, natural gas-fired, combined heat and power (CHP) turbine-heat recovery system. The system was to be 18 percent more efficient than generating electricity and steam independently. It was expected to meet 94 percent of MIT’s power, heating, and cooling needs and to cut its annual energy bills by $5.4 million. Even though MIT agreed to pay CelCo $1 million for standby power, the university expected to recoup its investment in 6.9 years.
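
MIT’s 6.9-year estimate reads as a simple-payback calculation: capital cost divided by annual savings net of the standby-power payment. The sketch below works backward from the figures in the text under that reading; the implied capital cost of roughly $30 million is an inference for illustration, not a number stated in the article.

    # Simple payback, working backward from the figures quoted above
    # (millions of dollars; the implied capital cost is an inference).
    gross_savings = 5.4                      # annual energy-bill savings
    standby_fee = 1.0                        # annual payment to CelCo
    payback_years = 6.9
    net_savings = gross_savings - standby_fee         # 4.4 per year
    implied_capital_cost = net_savings * payback_years
    print(round(implied_capital_cost, 1))              # ~30.4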

The federal government should establish a pollution-trading system for all major electricity-related pollutants, including nitrogen oxides and particulates.

MIT’s first major hurdle was getting the environmental permit it needed before construction could begin. Because the project retired two 1950s-vintage boilers and relegated the remaining boilers to backup and winter-peaking duty, the CHP system would reduce annual pollutant emissions by 45 percent, an amount equal to reducing auto traffic in Cambridge by 13,000 round trips per day. Despite these substantial emissions savings, plant designers had problems meeting the state’s nitrogen oxide standard. Unfortunately for MIT, the state’s approved technology for meeting this standard, which was designed for power stations more than 10 times larger than MIT’s generator, was expensive and posed a potential health risk because of the need to store large amounts of ammonia in the middle of the campus. MIT appealed to the regional emission-regulating body, performed a sophisticated life-cycle assessment, and showed that its innovative system had lower net emissions than the state-approved technology that vented ammonia.

Although MIT overcame the environmental hurdle and completed construction in September 1995, that same year it became the nation’s first self-generator to be penalized with a stranded-cost charge. The Massachusetts Department of Public Utilities (DPU), looking ahead to state utility restructuring, approved CelCo’s request for a “customer transition charge” of $3,500 a day ($1.3 million a year) for power MIT would not receive. MIT appealed the ruling in federal court, arguing that it already was paying $1 million per year for backup power, that CelCo had known about MIT’s plans for 10 years and could have taken action to compensate, and that the utility’s projected revenue loss was inflated. But the judges ruled that their court did not have jurisdiction. MIT then appealed to the Massachusetts Supreme Judicial Court, which in September 1997 reversed DPU’s approval of the customer transition charge, remanded the case for further proceedings, and stated that no other CelCo ratepayers contemplating self-generation should have to pay similar stranded costs.

Although MIT now has its own generator, which is saving money and reducing pollution, the university’s experience demonstrates the substantial effort required to overcome regulatory and financial barriers. Very few companies that might want to generate their own power have the resources or expertise that MIT needed to overcome the regulatory obstacles. As states and the federal government move to restructure the electric industry, they have an opportunity to remove these obstacles to innovation.

Barrier busting

Lack of innovation within the U.S. electric industry is not due to any mismanagement or lack of planning by utility executives. Those executives simply followed the obsolete rules of monopoly regulation. Reforming those obsolete rules will give industry leaders the incentive to dramatically increase the efficiency of electricity generation and transmission.

Part of the problem is perceptual. More than two generations have come to accept the notion that electricity is best produced at distant generators. Few question the traditional system in which centralized power plants throw away much of their heat, while more fuel is burned elsewhere to produce that same thermal energy. Few appreciate the fact that improved small-engine and turbine technology, along with the widespread availability of natural gas, has made it more efficient and economical to build dispersed power plants that provide both heat and power to consumers and that avoid transmission and distribution losses. Because utilities have been protected from market discipline for more than 60 years, few challenge the widespread assumption that the United States has already achieved maximum possible efficiency.

Mandating retail competition will not by itself remove the many barriers to innovation, efficiency, and productivity, as the recent history of monopoly deregulation shows. Federal legislation has deregulated the telephone industry, but some of the regional Bell operating companies have been able to preserve regulations that impede the entrance of new competitors into local telephone markets. The same is likely to be true in the electricity market, particularly if state and federal initiatives do not address potential regulatory, financial, and environmental barriers adequately.

Regulatory barriers

Unreasonable requirements for switching electricity suppliers. Most states adopting retail competition allow today’s utilities to recover most of their investments in power plants and transmission lines that will not survive in a competitive market. These so-called stranded costs are being recovered either through a fee on future electricity sales or a charge to those individuals or businesses exiting the utility’s system. High exit fees, however, would be a significant barrier to independent or onsite generators. In the wake of the MIT case, Massachusetts banned exit fees for firms switching to onsite generators with an efficiency of at least 50 percent. Other states should avoid exit fees that discourage the deployment of energy-efficient and pollution-reducing technologies. They might introduce a sliding scale that exempts new technologies by an amount proportional to their increased efficiency and decreased emissions of nitrogen oxides and sulfur dioxide. States should also resist the efforts of dominant power companies to impose lengthy notice periods before consumers can switch to a different electricity supplier.
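
One way to picture such a sliding scale is as a formula that shrinks the exit fee in proportion to how much cleaner and more efficient the onsite unit is than the grid supply it displaces. The sketch below is a hypothetical illustration of that idea, not any state's actual rule; the 50 percent efficiency threshold echoes the Massachusetts provision mentioned above, and every other parameter, including the equal weighting and the example numbers, is invented for the example.

```python
def exit_fee(base_fee, new_efficiency, old_efficiency, new_emissions, old_emissions,
             efficiency_threshold=0.50):
    """Hypothetical sliding-scale exit fee (illustrative only)."""
    # Units at least as efficient as the statutory threshold pay no exit fee,
    # as Massachusetts decided after the MIT case.
    if new_efficiency >= efficiency_threshold:
        return 0.0

    # Otherwise, shrink the fee in proportion to the efficiency gain and the
    # emissions reduction relative to the displaced grid supply (each capped at 100%).
    efficiency_gain = max(0.0, min(1.0, (new_efficiency - old_efficiency) / old_efficiency))
    emissions_cut = max(0.0, min(1.0, (old_emissions - new_emissions) / old_emissions))
    exemption = 0.5 * efficiency_gain + 0.5 * emissions_cut  # equal weights, chosen arbitrarily
    return base_fee * (1.0 - exemption)

# Example: an onsite unit about a third more efficient than the grid supply it
# replaces, with half the nitrogen oxide and sulfur dioxide emissions.
print(exit_fee(base_fee=1_000_000, new_efficiency=0.45, old_efficiency=0.33,
               new_emissions=2.0, old_emissions=4.0))
```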

Unreasonable requirements for selling to the grid. Dominant power companies also could limit competition by imposing obsolete and prohibitively expensive interconnection standards and metering requirements that have no relation to safety. To prevent that practice, the federal government should develop and regularly update national standards governing electricity interconnections and metering for all electric customers.

Requirements that discourage energy self-sufficiency. Many consumers now have the ability to cost-effectively generate some of their own electricity. However, large electric suppliers could block these potential competitors by penalizing customers who purchase less than all of their electricity from them or by charging excessive rates for backup or supplemental power. In order for all consumers to be able to choose their supplier of power (including backup and supplemental power), tariffs for the use of the distribution grid must be fair and nondiscriminatory. In addition, although some companies can use waste fuel from one plant to generate electricity for several of their other facilities, obsolete prohibitions on private construction of electric wires and other energy infrastructure often prevent such “industrial ecology.” States should follow Colorado’s lead and permit any firm that supplies energy to its own branches or units to construct electric wires and natural gas pipes.

Financial barriers

Tax policies that retard innovation. Depreciation schedules for electricity-generating equipment that are, on average, three times longer than those for similar-sized manufacturing equipment discourage innovation in the electric industry. Such depreciation schedules made sense when a utility monopoly wanted to operate its facilities, whatever the efficiency, for 30 or more years. They make no sense in the emerging competitive market, when rapid turnover of the capital stock will spur efficiency and technological innovation. Electric equipment depreciation therefore should be standardized and made similar to that of comparable industrial equipment.
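
The practical effect of schedule length is easy to see with straight-line depreciation; the $9 million turbine cost below is an arbitrary figure chosen only to make the comparison concrete.

```latex
% Annual straight-line deduction = capital cost / depreciation life (illustrative numbers)
\begin{align*}
\text{30-year schedule:}&\quad \frac{\$9\,\text{million}}{30\ \text{years}} = \$0.3\,\text{million deducted per year}\\[4pt]
\text{10-year schedule:}&\quad \frac{\$9\,\text{million}}{10\ \text{years}} = \$0.9\,\text{million deducted per year}
\end{align*}
```

The faster write-off recovers the investment three times as quickly, which makes it far easier to justify replacing the unit when a more efficient technology appears.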

Monopoly regulation that encourages the inefficient. Because they were able to obtain a return on any investment, utilities had an incentive to build large, expensive, and site-constructed power plants. They also had no reason to retire those plants, even when new generators were more economical, efficient, and environmentally sound. Moreover, monopoly regulation provided no reward to the utilities for energy-efficiency savings. What are needed instead are state and federal actions that advance competitive markets, which will impose incentives to trim fuel use and make better use of the waste heat produced by electric generation.

Environmental barriers

Unrecognized emissions reductions. U.S. environmental regulations are a classic case of the desire for the perfect (zero emissions) being the adversary of the good (lower emissions achieved through higher efficiency). Highly efficient new generators, for instance, are penalized by the Environmental Protection Agency’s (EPA’s) implementation of the Clean Air Act, which fails to recognize that even though a new generator will increase emissions at that site, it will eliminate the need to generate electricity at a facility with a higher rate of emissions, so that the net effect is a significant drop in emissions for the same amount of power generated. In order to reduce emissions overall and encourage competition, the EPA, in collaboration with the states, should instead develop output-based standards that set pollution allowances per unit of heat and electricity. The federal government should measure the life-cycle emissions of all electric-generation technologies on a regular basis. EPA or the states should also provide emissions credits to onsite generators that displace pollutants by producing power more cleanly than does the electric utility.
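
The difference between today's input-based permits and the output-based approach recommended here is easiest to see with numbers. The short sketch below compares an onsite combined heat and power unit with the conventional arrangement of a central power plant plus a separate boiler, crediting the CHP unit for both the electricity and the useful heat it delivers. The emission figures and outputs are round illustrative numbers, not EPA data.

```python
def output_based_rate(nox_lb, electricity_mwh, useful_heat_mwh):
    """Pounds of NOx per megawatt-hour of useful output (electricity plus heat)."""
    return nox_lb / (electricity_mwh + useful_heat_mwh)

# Illustrative round numbers, not measured data.
# A CHP unit burns fuel once and delivers 100 MWh of electricity plus 120 MWh of
# useful heat while emitting 60 lb of NOx.
chp = output_based_rate(nox_lb=60, electricity_mwh=100, useful_heat_mwh=120)

# The conventional alternative: a central plant supplies the same 100 MWh
# (emitting 150 lb of NOx) and a separate onsite boiler supplies the 120 MWh of
# heat (emitting another 60 lb).
separate = output_based_rate(nox_lb=150 + 60, electricity_mwh=100, useful_heat_mwh=120)

print(f"CHP:      {chp:.2f} lb NOx per MWh of useful output")       # ~0.27
print(f"Separate: {separate:.2f} lb NOx per MWh of useful output")  # ~0.95
# An input-based permit sees only the new emissions at the CHP site; an
# output-based standard sees the lower system-wide rate per unit of useful energy.
```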

Subsidy of “grandfathered” power plants. The Clean Air Act of 1970 exempted all existing electric generating plants from its stringent new emissions rules. As a result, a new generator with excess emissions can be shut down while an old plant producing 20 times as much pollution is allowed to keep operating. This perverse policy puts new technologies at a disadvantage, and some analysts worry that deregulation will enable the grandfathered plants, which face reduced environmental control costs, to generate more power and more pollution. Others argue that true competition, in which electric-generating companies are forced to cut costs dramatically, will make inefficient grandfathered plants far less attractive. The bottom line is that the old plants need to be replaced, and federal, state, and local governments should adopt innovative financing programs and streamline the permit process in order to speed the introduction of new facilities.

Lack of a market approach for all emissions. As it did with sulfur dioxide, the federal government should establish a pollution-trading system for all major electricity-related pollutants, including nitrogen oxides and particulates. The system should allow flexibility for emissions/efficiency tradeoffs. It should also gradually reduce the pollution allowances for all traded pollutants on a schedule that is made public well in advance.

Reliance on end-of-pipe environmental controls. One reason why industries neither generate electricity themselves nor use the waste heat for process steam is that current environmental regulations rely on end-of-pipe and top-of-smokestack controls. Such cleansers are expensive and increase electricity use dramatically. A more efficient solution would be for EPA and/or the states to allow process industries to trade electricity-hogging end-of-pipe environmental control technologies for increased efficiency and its accompanying reduction in pollution.

The innovation alternative

The United States is on the verge of the greatest explosion in power system innovation ever seen. The benefits of an innovation-based restructuring strategy for the electric industry will be widespread. Experience elsewhere in the world suggests that ending monopoly regulation will save money for all classes of consumers. In the four years since Australia began its utility deregulation, wholesale electricity prices have fallen 32 percent in real terms. Restructuring will also reduce pollution and improve air quality. The United Kingdom in 1989 began to deregulate electricity generation and sale and to shift from coal to natural gas; six years later, carbon dioxide emissions from power generation had fallen 39 percent and nitrogen oxides 51 percent.

Timing, however, is critical if the United States is to capture such benefits. In the next several years, much of the United States’ aging electrical, mechanical, and thermal infrastructure will need to be replaced. For example, if U.S. industry continues to encounter barriers to replacing industrial boilers with efficient generators such as combined heat and power systems, the country will have lost an opportunity for a massive increase in industrial efficiency.

Maintaining the status quo is no longer an option, in part because the current monopoly-based industry structure has forced Americans to spend far more than they should on outmoded and polluting energy services. If federal and state lawmakers can restructure the electric industry cooperatively, based on market efficiency and principles of consumer choice, they will bring about immense benefits for both the economy and the environment.

Scorched-Earth Fishing

The economic and social consequences of overfishing, along with the indiscriminate killing of other marine animals and the loss of coastal habitats, have stimulated media coverage of problems in the oceans. Attention to marine habitat destruction tends to focus on wetland loss, agricultural runoff, dams, and other onshore activities that are visible and easily photographed. In tropical regions, fishing with coral reef-destroying dynamite or cyanide has been in the news, the latter making it to the front page of the New York Times.

Yet a little-known but pervasive kind of fishing ravages far more marine habitats than any of these more noticeable activities. Bottom trawls, large bag-shaped nets towed over the sea floor, account for more of the world’s catch of fish, shrimp, squid, and other marine animals than any other fishing method. But trawling also disturbs the sea floor more than any other human activity, with increasingly devastating consequences for the world’s fish populations.

Trawling is analogous to strip mining or clear-cutting, except that trawling affects areas that are larger by orders of magnitude.

Trawl nets can be pulled either through mid-water (for catching fish such as herring) or along the bottom with a weighted net (for cod, flounder, or shrimp). In the latter method, a pair of heavy planers called “doors” or a rigid steel beam keeps the mouth of the net stretched open as the boat tows it along, and a weighted line or chain across the bottom of the net’s mouth keeps it on the seabed. Often this “tickler” chain frightens fish or shrimp into rising off the sea bottom; they then fall back into the moving net. Scallopers employ a modified trawl called a dredge, which is a chain bag that plows through the bottom, straining sediment through the mesh while retaining scallops and some other animals.

Until just a few years ago, trawlers were unable to work on rough bottom habitats or those strewn with rubble or boulders without risking hanging up and losing their nets and gear. For animal and plant communities that live on the sea bottom, these areas were thus de facto sanctuaries. Nowadays, every kind of seabed (silt, sand, clay, gravel, cobble, boulder, rock reef, worm reef, mussel bed, seagrass flat, sponge bottom, or coral reef) is vulnerable to trawling. For fishing rough terrain or areas with coral heads, trawlers have since the mid-1980s employed “rockhopper” nets equipped with heavy wheels that roll over obstructions. In addition to the biological problems rockhoppers create, this fishing gear also displaces commercial hook-and-line and trap fishers who formerly worked such sites without degrading the habitat. Wherever they fish and whatever they are catching, bottom trawls churn the upper few inches of the seabed, gouging the bottom and dislodging rocks, shells, and other structures and the creatures that live there.

Ravaging the seabed

Much of the world’s seabed is encrusted and honeycombed with structures built by living things. Trawls crush, kill, expose to enemies, and remove these sources of nourishment and hiding places, making life difficult and dangerous for young fish and lowering the quality of the habitat and its ability to produce abundant fish populations.

Bottom trawling is akin to harvesting corn with bulldozers that scoop up topsoil and cornstalks along with the ears. Trawling commonly affects the top two inches of sediment, which are the habitat of most of the animals that provide shelter and food for the fish, shrimp, and other animals that humans eat. At one Gulf of Maine site that was surveyed before trawling and again after rockhopper gear was used, researchers noted profound changes. Trawling had eliminated much of the mud surface of the site, along with extensive colonies of sponges and other surface-growing organisms. Rocks and boulders had been moved and overturned.

It may be hard to get excited about vanished sponges and overturned rocks. But for a fishing industry like New England’s, which has lost thousands of jobs and hundreds of millions of dollars in recent years and is suffering the resultant social consequences, habitat changes caused by fishing gear are significant. The simplification of habitat caused by trawling makes the young fish of commercially important species more vulnerable to natural predation. In lab studies of the effects of bottom type on fish predation, the presence of cobbles, as opposed to open sand or gravel-pebble bottoms, extended the time it took for a predatory fish to capture a young cod and allowed more juvenile cod to escape predation.

But virtually the entire Gulf of Maine is raked by nets annually, and New England’s celebrated Georges Bank, the once-premier and now-exhausted fishing ground, is swept three to four times per year. Parts of the North Sea are hit seven times, and along Australia’s Queensland coast, shrimp trawlers plow along the bottom up to eight times annually. A single pass kills 5 to 20 percent of the seafloor animals, so a year’s shrimping can wholly deplete the bottom communities.
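
A back-of-the-envelope calculation shows why repeated passes are so destructive. If each pass independently kills a fraction m of the animals in its path, the fraction surviving after n passes is (1 - m)^n; applying the per-pass mortality range quoted above to eight passes a year gives:

```latex
% Cumulative effect of repeated trawl passes (illustrative; assumes independent passes)
\begin{align*}
\text{Fraction surviving after } n \text{ passes} &= (1-m)^{n}\\[4pt]
m = 0.20,\ n = 8:&\quad (0.80)^{8} \approx 0.17 \quad\text{(roughly 83 percent of the animals gone)}\\
m = 0.05,\ n = 8:&\quad (0.95)^{8} \approx 0.66 \quad\text{(about a third gone even at the low end)}
\end{align*}
```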

More data needed

Considering how commonplace trawling has become in the world’s seas, researchers have completed astonishingly few studies. For example, virtually nothing is known about shrimp trawling’s effects on the Gulf of Mexico’s seabed, although this is one of the world’s most heavily trawled areas. The effects on fish populations and the fishing industry, although probably significant, have been difficult to quantify because there are few unaltered reference sites. But the studies available suggest that the large increases in bottom fishing from the 1960s through the early 1990s are likely to have reduced the productivity of seafloor habitats substantially, exacerbating depletion from overfishing.

Peter Auster and his colleagues at the University of Connecticut’s National Undersea Research Center have found that recent levels of fishing effort on the continental shelves by trawl and dredge gear “may have had profound impacts on the early life history in general, and survivorship in particular, of a variety of species.” At three New England sites, which scientists have studied either within and adjacent to areas closed to bottom trawls or before and after initial impact, trawls significantly reduced cover for juvenile fishes and the bottom community. In northwestern Australia, the proportion of high-value snappers, emperors, and groupers (species that congregate around sponge and soft-coral communities) dropped from about 60 percent of the catch before trawling to 15 percent thereafter, whereas less valuable fish associated with sand bottoms became more abundant.

In temperate areas, biological structures are much more subtle than the spectacular coral reefs of the tropics. A variety of animals, including the young of commercially important fish, mollusks, and crustaceans, rely on cover afforded by shells piled in the troughs of shallow sand ridges caused by storm wave action, depressions created by crabs and lobsters, and the havens provided by worm burrows, amphipod tubes, anemones, sea cucumbers, and small mosslike organisms such as bryozoans and sponges.

Some of these associations are specific: postlarval silver hake gather in the cover of amphipod tubes, young redfish associate with cerianthid tubes, and small squid and scup shelter in depressions made by skates. Newly settled juvenile cod defend territories around a shelter. Studies off Nova Scotia indicate that the survival of juvenile cod is higher in more complex habitats, which offer more shelter from predators. In another study, the density of small shrimp was 13 per square meter outside trawl drag paths and zero in a scallop dredge path.

A general misperception is that small invertebrate marine bottom dwellers are highly fecund and reproduce by means of drifting larvae that can recolonize large areas quickly. In truth, key creatures of the bottom community can disperse over only short distances. Offspring must find suitable habitat in the immediate vicinity of their parents or perish. The seafloor structures that juvenile fish rely on are often small in scale and are easily dispersed or eliminated by bottom trawls. Not only is the cover obliterated, but the organisms that create it are often killed or scattered by the trawls.

Les Watling of the University of Maine (who has studied the effect of mobile fishing gear in situ) and Marine Conservation Biology Institute director Elliott Norse have shown that trawling is analogous to strip mining or clear-cutting, except that trawling affects territories that are larger by orders of magnitude. An area equal to that of all the world’s continental shelves is hit by trawls every 24 months, a rate of habitat alteration variously calculated at between 15 and 150 times that of global deforestation through clear-cutting.

A multinational group of scientists at a workshop Norse convened in 1996 at the University of Maine concluded that bottom trawling is the most important human source of physical disturbance on the world’s continental shelves. Indeed, so few of the shelves are unscarred by trawling that studies comparing trawled and untrawled areas are often difficult to design. The lack of research contributes to the lack of awareness, and this could be one reason why trawling is permitted even in U.S. national marine sanctuaries.

Trawling is not uniformly bad for all species or all bottom habitats. In fact, just as a few species do better in clear-cuts, some marine species do better in trawled than in undisturbed habitats. A flatfish called the dab, for instance, benefits because trawling eliminates its predators and competitors and the trawls’ wakes provide lots of food.

But most species are not helped by trawling, and marine communities can be seriously damaged, sometimes for many decades. Communities that live in shallow sandy habitats subject to storms or natural traumas such as ice scouring tend to be resilient and resist physical disturbances. But deeper communities that seldom experience natural disturbances are more vulnerable and less equipped to recover quickly from trawling. In Watling and Norse’s global review of studies covering various habitats and depths, none showed general increases in species after bottom trawling, one showed that some species increased while others decreased, and four indicated little significant change. But 18 showed serious negative effects, and many of these were done in relatively shallow areas, which generally tend to be more resilient than deeper areas.

Comparing the damage caused by bottom trawling to the clear-cutting of forests is not unreasonable in light of the fact that some bottom organisms providing food or shelter may require extended undisturbed periods to recover. Sponges on New England’s sea floor can be 50 years old. Watling has said that if trawling stopped today, some areas could recover substantially within months, but certain bottom communities may need as much as a century.

Reducing the damage

Humanity’s focus on extracting food from the oceans has effectively blinded fishery managers to the nourishment and shelter that the fish themselves require. If attention were paid instead to conservation of the living diversity on the seabed, fisheries would benefit automatically because the ecosystem’s productivity potential and inherent output and service capacity would remain high. Actions that would safeguard both the fishing industry and the seabed need to be taken now. These measures would include:

  1. No-take replenishment zones where fishing is prohibited. This would help create healthy habitats supplying adjacent areas with catchable fish. Such designations are increasingly common around the world, particularly in certain areas of the tropics, and benefits often appear within a few years. In New England, fish populations are still very low, but they are increasing in areas that the regional fishery management councils and National Marine Fisheries Service have temporarily closed to fishing after the collapse of cod and other important fish populations. The agencies should make some of these closings permanent to permit the areas’ replenishment and allow research on their recovery rates.
  2. Fixed-gear-only zones where trawls and other mobile gear are banned in favor of stationary fishing gear, such as traps or hooks and lines, that doesn’t destroy habitat. New Zealand and Australia have closed areas to bottom trawls. So have some U.S. states, although these closures are usually attempts to protect fish in especially vulnerable areas or to reduce conflicts between trawls and other fishers, not to protect habitat. Temporary closures in federal waters, such as those in New England, should in some cases be made permanent for trawls but opened to relatively benign stationary gear. What gear is permitted should depend on bottom type, with mobile gear allowed more on shallow sandy bottoms that are relatively resistant to disturbance but barred from harder, higher-relief, and deeper bottoms where trawler damage is much more serious.
  3. Incentives for development of fishing gear that does not degrade the very habitat on which the fishing communities ultimately depend. Fish and fisheries have been hurt by perverse subsidies that have encouraged overfishing, overcapacity of fishing boats, and degradation of habitat and marine ecosystems. Intelligently designed financial incentives for encouraging new and more benign technology could tap the inherent inventiveness of fishers in constructive ways.

Patented Genes: An Ethical Appraisal

On May 18, 1995, about 200 religious leaders representing 80 faiths gathered in Washington, D.C., to call for a moratorium on the patenting of genes and genetically engineered creatures. In their “Joint Appeal Against Human and Animal Patenting,” the group stated: “We, the undersigned religious leaders, oppose the patenting of human and animal life forms. We are disturbed by the U.S. Patent Office’s recent decision to patent body parts and several genetically engineered animals. We believe that humans and animals are creations of God, not humans, and as such should not be patented as human inventions.”

Religious leaders, such as Ted Peters of the Center for Theology and Natural Sciences, argue that “patent policy should maintain the distinction between discovery and invention, between what already exists in nature and what human ingenuity creates. The intricacies of nature . . . ought not to be patentable.” Remarks such as this worry the biotech industry, which has come to expect as a result of decisions over two decades by the U.S. Patent and Trademark Office (PTO) and by the courts that genes, cells, and multicellular animals are eligible for patent protection. The industry is concerned because religious leaders have considerable influence and because their point of view is consistent with the longtime legal precedent that products of nature are not patentable.

Representatives of the biotech industry argue that their religious critics fail to understand the purpose of patent law. According to the industry view, patents create temporary legal monopolies to encourage useful advances in knowledge; they have no moral or theological implications. As Biotechnology Industry Organization president Carl Feldbaum noted: “A patent on a gene does not confer ownership of that gene to the patent holder. It only provides temporary legal protections against attempts by other parties to commercialize the patent holder’s discovery or invention.” Lisa Raines, vice president of the Genzyme Corporation, summed up the industry view: “The religious leaders don’t understand perhaps what our goals are. Our goals are not to play God; they are to play doctor.”

The differences between the two groups are not irreconcilable. The religious leaders are not opposed to biotechnology, and the industry has no interest in being declared the Creator of life. The path to common ground must begin with an understanding of the two purposes of patent law.

Double vision

Patent law traditionally has served two distinct purposes. First, it secures to inventors what one might call a natural property right to their inventions. “Justice gives every man a title to the product of his honest industry,” wrote John Locke in his Two Treatises on Civil Government. If invention is an example of industry, then patent law recognizes a preexisting moral right of inventors to own the products they devise, just as copyright recognizes a similar moral right of authors. Religious leaders, who believe that God is the author of nature (even if evolution may have entered the divine plan), take umbrage, therefore, when mortals claim to own what was produced by divine intelligence.

Second, patents serve the utilitarian purpose of encouraging technological progress by offering incentives, in the form of temporary commercial monopolies, for useful innovations. One could argue, as the biotech industry does, that these temporary monopolies are not intended to recognize individual genius but to encourage investments that are beneficial to society as a whole. Gene patents, if construed solely as temporary commercial monopolies, may make no moral claims about the provenance or authorship of life.

What industry wants is not to upstage the Creator but to enjoy a legal regime that protects and encourages investment.

Legal practice in the past has avoided a direct conflict between these two purposes of patent policy (one moral, the other instrumental) in part by regarding products of nature as unpatentable because they are not “novel.” For example, an appeals court in 1928 held that the General Electric Company could not patent pure tungsten but only its method for purifying it, because tungsten is not an invention but a “product of nature.” In 1948, the Supreme Court in Funk Brothers Seed Company v. Kalo Inoculant invalidated a patent on a mixture of bacteria that did not occur together in nature. The Court stated that the mere combination of bacterial strains found separately in nature did not constitute “an invention or discovery within the meaning of the patent statutes.” The Court wrote, “Patents cannot issue for the discovery of the phenomena of nature. . . . [They] are part of the storehouse of knowledge of all men. They are manifestations of laws of nature, free to all men and reserved exclusively to none.”

The moral and instrumental purposes of patent law came into conflict earlier in this century when plant breeders, such as Luther Burbank, sought to control the commercial rights to the new varieties they produced. If patents served solely an instrumental purpose to encourage by rewarding useful labor and investment, one might say that patents should issue on the products of the breeder’s art. Yet both the PTO and the courts denied patentability to the mere repackaging of genetic material found in nature because, as the Supreme Court said later about a hybridized bacterium, even if it “may have been the product of skill, it certainly was not the product of invention.”

To put this distinction in Aristotelian terms, breeders provided the efficient cause (that is, the tools or labor needed to bring hybrids into being) but not the formal cause (that is, the design or structure of these varieties). Plant breeders could deposit samples of a hybrid with the patent office, but they could not describe the design or plan by which others could construct a plant variety from simpler materials. The patent statute, however, requires applicants to describe the design “in such full, clear, concise and exact terms as to enable any person skilled in the art to which it pertains . . . to make and use the same.” A breeder could do little more to specify the structure of a new variety than to refer to its ancestor plants and to the methods used to produce it. This would represent no advance in plant science; it would tell others only what they already understood.

Confronted with the inapplicability of intellectual property law to new varieties of plants, Congress enacted the Plant Patent Act of 1930 and the Plant Variety Protection Act of 1970, which protect new varieties against unauthorized asexual and sexual reproduction, respectively. Breeders were required to deposit samples in lieu of providing a description of how to make the plant. Congress thus created commercial monopolies that implied nothing about invention and therefore nothing about moral or intellectual property rights. Accordingly, religious leaders had no reason to object to these laws.

The Court changes everything

This legal understanding concerning products of nature lasted until 1980, when the Supreme Court, by a 5-4 majority, decided in Diamond v. Chakrabarty that Chakrabarty, a biologist, could patent hybridized bacteria because “his discovery is not nature’s handiwork, but his own.” The Court did not intend to reverse the long tradition of decisions that held products of nature not to be patentable. The majority opinion reiterated that “a new mineral discovered in the earth or a new plant discovered in the wild is not patentable subject matter.” The majority apparently believed that the microorganisms Chakrabarty wished to patent were not naturally occurring but resulted from “human ingenuity and research.” The plaintiffs’ lawyers failed to disabuse the Court of this mistaken impression because they focused on the potential hazards of engineered organisms, a matter (as the Court held) that is irrelevant to their patentability.

Although Chakrabarty’s patent disclosure, in its first sentence, claims that the microorganisms were “developed by the application of genetic engineering techniques,” Chakrabarty had simply cultured different strains of bacteria together in the belief that they would exchange genetic material in a laboratory “soup” just as they do in nature. Chakrabarty himself was amazed at the Court’s decision, since he had used commonplace methods that also occur naturally to exchange genetic material between bacteria. “I simply shuffled genes, changing bacteria that already existed,” Chakrabarty told People magazine. “It’s like teaching your pet cat a few new tricks.”

The Chakrabarty decision emboldened the biotechnology industry to argue that patents should issue on genes, proteins, and other materials that had commercial value. In congressional hearings on the Biotechnology Competitiveness Act (which passed in the Senate in 1988), witnesses testified that the United States was locked in a “global race against time to assure our eminence in biotechnology,” a race in which the PTO had an important role to play.

While Congress was debating the issue, the PTO was already implementing a major change in policy. It began routinely issuing patents on products of nature (or functional equivalents), including genes, gene fragments and sequences, cell lines, human proteins, and other naturally occurring compounds. For example, in 1987, Genetics Institute, Inc., received a patent on human erythropoietin (EPO), a protein consisting of 165 amino acids that stimulates the production of red blood cells. Genetics Institute did not claim in any sense to have invented EPO; it had extracted a tiny amount of the naturally occurring polymer from thousands of gallons of urine. Similarly, Scripps Clinic patented a clotting agent, human factor VIII:C, a sample of which it had extracted from human blood.

Harvard University acquired a patent on glycoprotein 120 antigen (GP120), a naturally occurring protein on the coat of the human immunodeficiency virus. A human T cell antigen receptor has also been patented. Firms have received patents for hundreds of genes and gene fragments; they have applied for patents for thousands more. With few exceptions, the products of nature for which patents issued were not changed, redesigned, or improved to make them more useful. Indeed, the utility of these proteins, genes, and cells typically depends on their functional equivalence with naturally occurring substances. Organisms produced by conventional breeding techniques also now routinely receive conventional patents, even though they may exhibit no more inventive conception or design than those Burbank bred. The distinction between products of skill and of invention, which was once sufficient to keep breeders from obtaining ordinary patents, no longer matters in PTO policy. Invention is no longer required; utility is everything.

The search for common ground

Opponents of patents on genetic materials generally support the progress of biotechnology. At a press conference, religious leaders critical of patenting “the intricacies of nature” emphasized that they did not object to genetic engineering; indeed, they applauded the work of the biotech industry. Bishop Kenneth Carder of the United Methodist Church said, “What we are objecting to is the ownership of the gene, not the process by which it is used.” In a speech delivered to the Pontifical Academy of Sciences in 1994, Pope John Paul II hailed progress in genetic science and technology. Nevertheless, the Pope said: “We rejoice that numerous researchers have refused to allow discoveries made about the genome to be patented. Since the human body is not an object that can be disposed of at will, the results of research should be made available to the whole scientific community and cannot be the property of a small group.”

Industry representatives and others who support gene patenting may respond to their religious critics in either of two ways. First, they may reply that replicated complementary DNA (cDNA) sequences, transgenic plants and animals, purified proteins, and other products of biotechnology would not exist without human intervention in nature. Hence they are novel inventions, not identical to God’s creations. Second, industry representatives may claim that the distinction between “invention” and “discovery” is no longer relevant to patent policy, if it ever was. They may concede, then, that genetic materials are products of nature but argue that these discoveries are patentable compositions of matter nonetheless.

Consider the assertion that genes, gene sequences, and living things, if they are at all altered by human agency, are novel organisms and therefore not products of nature. This defense of gene patenting would encounter several difficulties. First, patents have issued on completely unaltered biological materials such as GP120. Second, the differences between the patented and the natural substance, where there are any, are unlikely to affect its utility. Rather, the value or usefulness of the biological product often depends on its functional identity to or equivalence with the natural product and not on any difference that can be ascribed to human design, ingenuity, or invention. Third, the techniques, such as cDNA replication and the immortalization of cell lines, by which biological material is gathered and reproduced have become routine and obvious. The result of employing these techniques, therefore, might be the product of skill, but not of invention.

Proponents of gene patenting might concede that genes, proteins, and other patented materials are indeed products of nature. They may argue with Carl Feldbaum that this concession is irrelevant, however, because patents “confer commercial rights, not ownership.” From this perspective, which patent lawyers generally endorse, patenting makes no moral claim to invention, design, or authorship but only creates a legal monopoly to serve commercial purposes. Ownership remains with God. Accordingly, gene patents carry no greater moral implications than do the temporary monopolies plant breeders enjoy in the results of their investment and research.

Although this reply may be entirely consistent with current PTO policy, legal and cultural assumptions for centuries have associated patents with invention and therefore with the ownership of intellectual property. These assumptions cannot be dismissed. First, patents confer the three defining incidents of ownership: the right to sell, the right to use, and the right to exclude. If someone produced and used, say, human EPO, it would be a violation of the Genetics Institute patent. But all human beings produce EPO as well as other patented proteins in our bodies. Does this mean we are infringing a patent? Of course not. But why not, when producing and using the same protein outside our bodies does infringe the patent? If a biotech firm patents a naturally occurring chemical compound for pesticidal use, does that mean that indigenous people who have used that chemical for centuries will no longer be allowed to extract and use it? That such questions arise suggests that patents confer real ownership of products of nature, not just abstract commercial rights.

Second, intuitive ties founded in legal and cultural history connect patents with the moral claim to intellectual property. For centuries the PTO followed the Supreme Court in insisting that “a product must be more than new and useful to be patented; it must also satisfy the requirements of invention.” The requirements of invention included a contribution to useful knowledge, some display of ingenuity for which the inventor might take credit. By disclosing this new knowledge (rather than keeping it a trade secret), the inventor would contribute to and thus repay the store of knowledge on which he drew. One simply cannot scoff, as industry representatives sometimes do, at a centuries-long tradition of legal and cultural history, enshrined in every relevant Supreme Court decision, that connects intellectual property with moral claims based on contributions to knowledge.

Religious leaders who decry current PTO policy in granting intellectual property rights to products of nature have suggested alternative ways to give the biotech industry the kinds of commercial protections it seeks. Rabbi David Saperstein, director of the Religious Action Center of Reform Judaism in Washington, D.C., has proposed that ways be found “through contract laws and licensing procedures to protect the economic investment that people make . . .” On the industry side, spokespersons have been eager to assure their clerical critics that they do not want to portray themselves as the authors of life. What industry wants, they argue, is not to upstage the Creator but to enjoy a legal regime that protects and encourages investment. Industry is concerned with utility and practical results; religious and other critics are understandably upset by the moral implications of current PTO policy.

It is not hard to see the outlines of a compromise. If Congress enacts a Genetic Patenting Act that removes the “description requirement” for genetic materials, as it has removed this requirement for hybridized plants, patents conferred on these materials may carry no implications about intellectual authorship. Such a statute, explicitly denying that biotech firms have invented or designed products of nature, might base gene patenting wholly on instrumental grounds and thus meet the objections of religious leaders.

A new statutory framework could accommodate all these concerns if it provided the kinds of monopoly commercial rights industry seeks without creating the implication or connotation that industry “invents,” “designs,” or “owns” genes as intellectual property. In other words, some middle ground modeled on the earlier plant protection acts might achieve a broad agreement among the parties now locked in dispute.

Extending Manufacturing Extension

At the start of this decade, U.S. efforts to help smaller manufacturers use technology were patchy and poorly funded. A handful of states ran industrial extension programs to aid companies in upgrading their technologies and business practices, and a few federal centers were also getting underway. Eight years later the picture has changed considerably. Seventy-five programs are now operating across the country under the aegis of a national network known as the Manufacturing Extension Partnership (MEP). This network has not only garnered broad industrial and political endorsement but has also pioneered a collaborative management style, bringing together complementary service providers to offer locally managed, demand-driven services to small manufacturers. That approach contrasts markedly with the fragmented “technology-push” style of previous federal efforts. Most important, early evidence indicates that the MEP is helping companies become more competitive. But to exert an even more profound impact, the MEP needs to pursue a strategic, long-term approach to ensuring the vitality of small manufacturers.

When proponents advanced ideas in the late 1980s for a national system of manufacturing extension, U.S. firms were facing stiff new competition from other countries. A wrenching decade of restructuring followed by strong domestic growth has boosted the competitive position of the U.S. economy. Yet most of the gains in U.S. manufacturing performance have occurred among larger companies with the resources to reengineer their industrial processes, introduce new technologies and quality methods, and transform their business practices. The majority of small firms lag in productivity growth and in adopting improved technologies and techniques. Indeed, in recent years, per-employee value-added and wages in small U.S. manufacturers have fallen increasingly behind the levels attained in larger units.

Industrial extension focuses mainly on these small manufacturers. There are some 380,000 industrial companies in the United States with fewer than 500 employees. Small manufacturers frequently lack information, expertise, time, money, and confidence to upgrade their manufacturing operations, resulting in under-investment in more productive technologies and missed opportunities to improve product performance, workforce training, quality, and waste reduction. Private consultants, equipment vendors, universities, and other assistance sources often overlook or cannot economically serve the needs of smaller firms. System-level factors, such as the lack of standardization, regulatory impediments, weaknesses in financial mechanisms, and poorly organized inter-firm relationships, also constrain the pace of technological diffusion and investment.

The MEP addresses these problems by organizing networks of public and private service providers that have the resources, capabilities, and linkages to serve smaller companies. Manufacturing extension centers typically employ industrially experienced field personnel who work directly with firms to identify needs, broker resources, and develop appropriate assistance projects. Other services are also offered, including information provision, technology demonstration, training, and referrals. Given the economy-wide benefits of accelerating the deployment of technology and the difficulties many companies face in independently implementing technological upgrades, the MEP is a classic example of how collective public action in partnership with the private sector can make markets and the technology diffusion process more efficient. For example, rather than competing with private contractors, as some critics feared, the MEP helps companies use private consultants more effectively and encourages firms to implement their recommendations.

The federal effort began when the 1988 Trade and Competitiveness Act authorized the Department of Commerce’s National Institute of Standards and Technology (NIST) to form regional manufacturing technology centers. The first few years brought just a small increase in federal support; only with the Clinton administration’s pledge to build a national system did the MEP take off. Under a competitive process managed by NIST, resources from the Technology Reinvestment Project (the administration’s defense conversion initiative) and the Commerce Department became available. The states had to provide matching funds, with private industry revenues expected as well. Existing state manufacturing extension programs were expanded and new centers were established so that, by 1997, the MEP achieved coverage in all fifty states. In FY97, state monies plus fees from firms using MEP services matched some $95 million in federal funding. Congress has endorsed a federal budget of about $112 million for the MEP in FY98, more than a sixfold increase over the 1993 allocation.

MEP centers directly operate more than 300 local offices and work with more than 2,500 affiliated public and private organizations, including technology and business assistance centers, economic development groups, universities and community colleges, private consultants, utilities, federal laboratories, and industry associations. Through this network, the MEP services reach almost 30,000 firms a year. (Some two-thirds of these companies have fewer than 100 employees.) The program is decentralized and flexible: Individual centers develop strategies and services appropriate to state and local conditions. For example, the Michigan Manufacturing Technology Center specializes in working with companies in the state’s automotive, machine tool and office furniture industries. Similarly, the Chicago Manufacturing Center has developed resources to address the environmental problems facing the city’s many small metal finishers.

Originally, Congress envisaged that NIST’s manufacturing centers would transfer advanced cutting-edge technology developed under federal sponsorship to small firms. But MEP staff soon realized that small companies mostly need help with more pragmatic and commercially proven technologies; these firms often also needed assistance with manufacturing operations, workforce training, business management, finance, and marketing to get the most from existing and newly introduced technologies. Most MEP centers now address customers’ training and business needs as well as promote technology. In general, centers have found that staff and consultants with private-sector industrial experience are better able than laboratory researchers to deliver such services.

Most manufacturing extension projects result in small but useful incremental improvements within firms. But in some cases, much larger results have been produced. A long-established pump manufacturer with nearly 130 employees was assisted by the Iowa Manufacturing Technology Center to gain an international quality certification; subsequently, the company won hundreds of thousands of dollars in new export sales. In western New York, a 14-employee machine shop struggled with a factory floor that was cluttered with machinery, scrap, and work in progress. The local MEP affiliate conducted a computer-aided redesign of the shop floor layout and recommended improved operational procedures, resulting in major cost savings, faster deliveries, freed management time, and increased sales for the company. In Massachusetts, manufacturing extension agents helped a 60-employee manufacturer of extruded aluminum parts address productivity, production scheduling, training, and marketing problems at its 50-year-old plant. The company credits MEP assistance with tens of thousands of dollars of savings through set-up time reductions, more timely delivery, and increased sales.

Systematic evaluation studies have confirmed that the MEP is having a positive effect on businesses and the economy. For example, in a 1995 General Accounting Office survey of manufacturing extension customers, nearly three-quarters of responding firms said that improvements in their overall business performance had resulted. Evaluations of the Georgia Manufacturing Extension Alliance reveal that one year after service, 68 percent of participating firms act on project recommendations, with more than 40 percent of firms reporting reduced costs, 32 percent reporting improved quality, and 28 percent making a capital investment. A benefit-cost study of projects completed by the Georgia program found that combined net public and private economic benefits exceeded costs by a ratio of 1.2:1 to 2.7:1. A Michigan study using seventeen key technology and business performance metrics found that manufacturing technology center customers improve faster overall than comparable firms in a control group that did not receive assistance. A 1996 study of New York’s Industrial Extension Service (an affiliate of the MEP) also found that the business performance of assisted firms improved when compared with similar companies that did not receive assistance. Finally, a recent Census Bureau analysis indicates that firms assisted by industrial extension have higher productivity growth than non-assisted companies, even after controlling for the performance of firms prior to program intervention.

Challenges and issues

The MEP has achieved national coverage and established local service partnerships; most important, the early evidence indicates that MEP services are leading to desired business and economic goals. However, now that the MEP has completed its start-up phase, several challenges and issues need to be addressed to enable the program to optimize the network it has established and to improve the effectiveness of manufacturing extension services in coming years.

Strategic Orientation. Although MEP affiliates are helping firms become leaner and more efficient, lower costs and higher efficiency are only part of a strategic approach to manufacturing and technology-based economic development. A continuing concern is that although the number of small manufacturing firms in the United States is growing, their average wages have lagged those of larger companies. Part of the problem is that many small companies produce routine commodity products with relatively low added value that are subject to intense international competition. If these firms are to offer higher wages, they must not only become more productive but also find ways to become more distinctive, responsive, and specialized. These capabilities may be promoted by deploying more advanced manufacturing processes, initiating proactive business strategies, forming collaborative relationships with other companies, or developing new products. But to help small firms move in these directions, the MEP will need to adjust its service mix to offer assistance that goes well beyond short-term problem solving for individual firms.

For instance, to help more small firms to develop and sell higher value products in domestic or export markets, the MEP should increase services that focus on new product design and development, and develop even stronger links to R&D centers and financing and marketing specialists. Already under way is a “supply-chain” initiative that aims to upgrade suppliers of firms or industries that are located across state boundaries. The MEP should do more along such lines by supporting initiatives that help suppliers and buyers talk to one another. The MEP has sponsored pilot projects to offer specialized expertise in crosscutting fields such as pollution control or electronic commerce. Again, such efforts should be expanded to stimulate the adoption of emerging technologies and practices, such as those involved with environmentally conscious manufacturing methods, the exploitation of new materials, and the use of new communication technologies. These efforts should be coupled with a greater emphasis on promoting local networks of small firms to speed the dissemination of information and encourage collaborative problem solving, technology absorption, training, product development, and marketing.

The Least-Cost Way to Control Climate Change

In December 1997 in Kyoto, Japan, representatives of 159 countries agreed to a protocol to limit the world’s emissions of greenhouse gases. Now comes the hard part: how to achieve the reductions. Emissions trading offers a golden opportunity for a company or country to comply with emissions limits at the lowest possible cost.

Trading allows a company or country that reduces emissions below its preset limit to trade its additional reduction to another company or country whose emissions exceed its limit. It gives companies the flexibility to choose which pollution reduction approach and technology to implement, allowing them to lessen emissions at the least cost. And by harnessing market forces, it leads to innovation and investment. The system encourages swift implementation of the most efficient reductions nationally and internationally; provides economic benefit to those that aggressively reduce emissions; and gives emitters an economically viable way to meet their limits, leading to worldwide efficiency in slowing global warming.

The design of a U.S. cap-and-trade program should follow the basic features of the highly successful Acid Rain Program.

Benefits to the United States from emissions trading would most likely be achieved domestically. However, trading between developed nations and between developed and developing nations has much to offer. It can accelerate investment in developing countries. And it gives developed countries the flexible instruments they say they need to garner the political support necessary to agree to large emissions reductions. In a recent speech in Congress, Sen. Robert Byrd (D-W. Va.) stated that “reducing projected emissions by a national figure of one-third does not seem plausible without a robust emissions-trading and joint-implementation framework.”

If effective trading systems are to be designed, tough political and technical issues will need to be addressed at the Conference of the Parties in Buenos Aires in November 1998, the next big meeting of the nations involved in the Kyoto Protocol. This is especially true for international trading, because different nations have significantly different approaches to reducing greenhouse gases and because many developing countries are opposed to the very notion of trading. However, if trading systems can be worked out, the United States and the world could meet emissions commitments at the lowest possible cost.

The challenge

The Kyoto Protocol requires developed countries to reduce greenhouse gas (GHG) emissions to an average of 5 percent below 1990 levels in the years from 2008 to 2012. The United States has agreed to cut emissions by 7 percent below its 1990 level. Russia and other emerging economies have somewhat lesser burdens. However, estimates indicate that at current growth rates, the United States would be almost 30 percent above its 1990 baseline for GHG emissions by 2010. Most emissions come from the combustion of fossil fuels. Carbon dioxide is responsible for 86 percent of U.S. emissions, methane for 10 percent, and other gases for 4 percent. Substantial reductions will be needed.
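The size of the U.S. task follows directly from these figures. The sketch below is a rough illustration only; the normalization of 1990 emissions to 1.0 and the rounding are assumptions made for clarity.

    # Back-of-the-envelope sketch of the U.S. reduction gap implied by the figures above.
    # Illustrative only: 1990 emissions are normalized to 1.0.
    baseline_1990 = 1.0
    projected_2010 = 1.30        # roughly 30 percent above the 1990 baseline at current growth rates
    kyoto_target = 1.0 - 0.07    # 7 percent below the 1990 level

    gap_vs_1990 = projected_2010 - kyoto_target        # 0.37, i.e., 37 points of the 1990 level
    cut_from_projected = gap_vs_1990 / projected_2010  # about 0.28 of projected 2010 emissions

    print(f"Gap relative to the 1990 baseline: {gap_vs_1990:.2f}")
    print(f"Required cut from the projected 2010 level: {cut_from_projected:.0%}")

In other words, meeting the target means giving up roughly 37 points of emissions measured against the 1990 level, or a cut of nearly 30 percent from the level the country is otherwise projected to reach in 2010.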

One strategy would be a tax on the carbon content of fuels, which determines the amount of GHGs emitted when a fuel is burned. Although this may be the most efficient way to reduce GHG emissions, it is politically unrealistic in the United States. Our domestic strategy is more likely to be a choice between a trading system linked with a cap on overall emissions and the more traditional approach of setting emission standards for each sector of the economy.

The strategy in other countries may be different. During the Kyoto debates, a sharp difference was evident between the United States, which favored a trading approach to achieving national emissions targets, and European nations, which are contemplating higher taxes as well as command-and-control strategies such as fuel-efficiency requirements for vehicles and mandated pollution controls for utilities and industry. Nonetheless, all countries can still benefit from international trading.

Why trading can work

An emissions trading system allows emitters with differing costs of pollution reduction to trade pollution allowances or credits among themselves. Through trading, a market price emerges that reflects the marginal costs of emissions reduction. If transaction costs are low, trading leads to overall efficiency in meeting pollution goals, because each source can decide whether it is cheaper to reduce its own emissions or acquire allowances from others.
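A stylized two-source example, with hypothetical numbers chosen only to illustrate the logic, shows why this is efficient:

    # Two sources must together cut 100 tons; their per-ton abatement costs differ.
    cost_a = 50          # dollars per ton abated at source A (cheap reductions available)
    cost_b = 200         # dollars per ton abated at source B (only expensive reductions left)
    required_each = 50   # tons each source must cut if no trading is allowed

    no_trading = required_each * cost_a + required_each * cost_b   # $12,500
    # With trading, A abates the full 100 tons and sells 50 tons of allowances to B
    # at any price between $50 and $200 per ton; the total resource cost is A's alone.
    with_trading = 100 * cost_a                                    # $5,000

    print(f"Total cost without trading: ${no_trading:,}")
    print(f"Total cost with trading:    ${with_trading:,}")

Whatever price the two sources negotiate between $50 and $200 merely divides the savings between them; the environmental outcome, 100 tons cut, is unchanged.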

Trading creates benefits by providing flexibility in technology choices both within and between firms. For example, consider an electric utility that burns coal in its boilers. To comply with its emissions limit, it could add costly scrubbers to its smokestacks or it could buy allowances to tide it over until it is ready to invest in much more efficient capital equipment. The latter option often results in lower or no long-term costs when savings from the new technology and avoidance of the costly quick fix are figured in. It also creates the potential for greater long-term pollution reductions. By not spending money on the quick fix, the utility has more capital to invest in more efficient future processes. This point is critical, because reductions beyond those prescribed in the Kyoto Protocol will be needed in the years after 2010 to stabilize global warming for the rest of the 21st century.

If full trading between all countries were allowed, the costs of complying with the Kyoto Protocol would fall dramatically.

Some political and environmental groups oppose trading, equating it to selling rights to pollute. But this view fails to recognize the substantial differences in business processes and technologies, which may allow one source to reduce emissions much more cheaply than another. It also undervalues the importance of timing in investment decisions; the ability to buy a few years of time through trading may allow companies to install improved equipment or make more significant process changes. Trading leads to the firms with the lowest cost of compliance making the most reductions, creating the most cost-efficient system of meeting pollution goals.

Trading is also denigrated by those who say it can create emissions hot spots that result in local health problems. But GHGs have no local effects on human health or ecosystems; they are problematic only through their contribution to global atmospheric concentrations.

Why a cap is needed

There are two prevailing emissions trading approaches: an emissions cap and allowance system and an open-market system. The cap-and-trade system establishes a hard cap on total emissions, say for a country, and allocates to each emitter allowances that represent its share of the total. A source could emit exactly the tonnage covered by the allowances it is issued; emit fewer tons and sell the difference or store (bank) it for future use; or purchase allowances in order to emit more than its initial allotment. Allowances are freely traded under a private system, much as a stock market operates. A great deal of up-front work must be done to establish baselines for the emitters and to put a trading process in place, but once that work is completed, trades can take place freely between emitters. No regulatory approval is needed. Environmental compliance is ensured because each emitter must hold enough allowances to cover its actual emissions each year.
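The bookkeeping behind such a system is simple to sketch. The following is a minimal illustration with hypothetical sources and tonnages; it is not a description of any existing allowance registry.

    # Minimal sketch of cap-and-trade bookkeeping (hypothetical sources and tonnages).
    class Source:
        def __init__(self, name, allowances):
            self.name = name
            self.allowances = allowances   # tons the source may emit: initial allotment plus banked or purchased tons
            self.emissions = 0

        def emit(self, tons):
            self.emissions += tons

        def sell_allowances(self, buyer, tons):
            # A private transaction; no regulatory approval is needed.
            self.allowances -= tons
            buyer.allowances += tons

        def in_compliance(self):
            # Compliance test: allowances held must cover actual emissions.
            return self.allowances >= self.emissions

    utility = Source("utility", allowances=1000)
    factory = Source("factory", allowances=500)
    utility.emit(800)    # 200 tons under its allotment; the surplus can be banked or sold
    factory.emit(600)    # 100 tons over its allotment
    utility.sell_allowances(factory, 100)
    print(utility.in_compliance(), factory.in_compliance())   # True True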

The beauty of the cap-and-trade system is an elegant separation of roles. The government exerts control in setting the cap and monitoring compliance, but decisions about compliance technology and investment choices are left to the private sector.

The best example of such a system is found in the U.S. Acid Rain Program, and it has been remarkably effective. An analysis by the General Accounting Office shows that this cap-and-trade system, created in 1990 to halve emissions of sulfur dioxide by utilities, cut costs to half of what was expected under the previous rate-based standard and well below industry and government estimates. What’s more, recent research at MIT indicates that a third of all utilities complied in 1995 at a profit. This happened because there were unforeseen cost savings in switching from high-cost scrubbers to burning low-sulfur coal, and because trading enabled a utility to transfer allowances between its own units, allowing it to use low-emitting plants to meet base loads and high-emitting plants only at peak demand periods.

The aversion toward trading expressed by many developing countries ignores the many benefits that could accrue to them.

The open-market trading system works differently. Generally, there is no cap. Regulators set limits for each GHG coming from each source of emissions (say, for carbon dioxide from the smokestacks of an electric utility). Therefore, whenever two emitters want to trade, they must get regulatory approval. Although the up-front work may be less than that required for a cap-and-trade system, the need for approval of each trade makes transaction costs high. Also, there is always uncertainty about whether a trade will be approved, and approvals can take weeks or months, all of which reduce the incentive to trade and create an inefficient system.

The most recent results from the U.S. Acid Rain Program show that transaction costs are about 1.5 percent of the value traded, which is about the same as those for trades in a stock market. Transaction costs for open-market trading are an order of magnitude or more higher. Not surprisingly, the results of open-market trading in several U.S. states to reduce emissions of carbon monoxide, nitrogen oxides, and volatile organic compounds have been generally disappointing.
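To see what that difference means in practice, consider a single hypothetical trade; the trade size below is an assumption, while the two friction rates come from the figures just cited.

    # Illustrative transaction-cost comparison for a single allowance trade.
    trade_value = 1_000_000        # dollar value of allowances changing hands (assumed for illustration)
    cap_and_trade_rate = 0.015     # about 1.5 percent, as observed in the Acid Rain Program
    open_market_rate = 0.15        # an order of magnitude higher

    print(f"Cap-and-trade friction: ${trade_value * cap_and_trade_rate:,.0f}")   # $15,000
    print(f"Open-market friction:   ${trade_value * open_market_rate:,.0f}")     # $150,000

When friction of that size approaches the savings a trade would deliver, the trade simply does not happen.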

An emissions cap-and-trade system would reduce GHGs within the United States at very low cost. Trading between developed countries and between developed and developing countries could help nations meet their Kyoto Protocol targets, too. Let’s consider what is needed for each system.

Trading at home

The protocol allows a country to use whatever means it wants to achieve its own limit, so there is no restriction on creating a good cap-and-trade system within the United States. The first step would be to allocate the U.S. allotment of carbon emissions among emitters. Emissions come from several major sectors: electricity generation contributes 35 percent; transportation, 31 percent; general industry, 21 percent; and residential and commercial sources, 11 percent. However, because large sources are responsible for most GHGs, the United States could capture between 60 and 98 percent of emissions by including only a few thousand companies in the system.

Possibly the biggest cap-and-trade question for the United States is whom to regulate. The most efficient system would be to impose limits on carbon fuel providers: the coal, oil, and gas industries. These fuels account for up to 98 percent of carbon emissions. Industry groups are concerned, however, that regulating fuel providers is tantamount to a quota on fossil fuels, although similar reductions in fossil fuels would be required by any GHG regulation.

The alternative is to impose limits on fuel consumers: utilities, manufacturers, automobiles, and residential and commercial establishments. This method is less efficient, covering 60 to 80 percent of emissions, because it cannot practically handle the thousands of small industrial or commercial firms, not to mention residences, and because it does not provide incentives to reduce vehicle miles traveled. These inefficiencies will lead to higher overall costs and less burden-sharing.

However, political considerations will be as important as technical ones in choosing whom to regulate, and a hybrid system is possible. The most likely hybrid would be direct regulation of electric utilities and industrial boilers, capturing most of the country’s combustion of coal and natural gas. A fuel-provider system would then be used to regulate sales of petroleum products and fossil fuels to residential and commercial markets. This may be politically expedient and could be almost as efficient as a pure fuel-provider model.

The design of the cap-and-trade program should follow the basic features of the U.S. Acid Rain Program. That program creates a gold standard with three key elements: a fixed emissions cap, free trading and banking of allowances, and strict monitoring and penalty provisions.

Several added benefits could be incorporated. First, the cost of continuous emissions monitoring could be reduced because emissions of carbon dioxide can be measured very accurately from the carbon content of fuel. Second, the system could allow trading between gases. This could spur significant reductions of methane, which contributes 10 percent of the warming potential of U.S. emissions. Ton for ton, methane has 21 times the warming potential of carbon dioxide, and certain sources of methane (landfills, coal mines, and natural gas extraction and transportation systems) could be included. Methane control can be low-cost or even profitable, because the captured methane can be sold; thus, trading between carbon dioxide and methane sources could be a cheap way to reduce the U.S. contribution to global warming.
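The cross-gas arithmetic is straightforward. A minimal sketch, using the warming-potential factor of 21 cited above and a hypothetical landfill project:

    # Converting a methane reduction into carbon-dioxide-equivalent allowances.
    GWP_METHANE = 21   # tons of CO2 with the same warming effect as one ton of methane

    def co2_equivalent(methane_tons):
        """Allowance credit (in CO2-equivalent tons) earned by capturing methane."""
        return methane_tons * GWP_METHANE

    # A landfill that captures 1,000 tons of methane (hypothetical figure) frees up
    # 21,000 tons' worth of CO2 allowances that it can sell or bank.
    print(co2_equivalent(1000))   # 21000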

A third design choice is whether to allocate allowances to existing emitters for free or to auction them. Allocating allowances, as in the Acid Rain Program, is the most politically expedient option, but it burdens later entrants, who must buy allowances from others who have already received them. Auctioning allowances would make them available to all and could have a dual benefit if the proceeds were used to reduce employment taxes or spur investment.

The U.S. Acid Rain Program’s cap-and-trade system has cut the cost of sulfur dioxide compliance to $100 per ton of abated emissions, compared to initial industry estimates of $700 to $1,000 per ton and Environmental Protection Agency (EPA) estimates of $400 per ton. The same kind of cost reductions can be expected in a GHG system. The National Academy of Sciences has estimated that the United States could reduce 25 percent of its carbon emissions at a profit and another 25 percent at very low or no cost, because of the hundreds of opportunities to achieve energy efficiency or switch fuels in our economy. Examples given by the Academy include switching from coal to natural gas in electricity generation, improving vehicle fuel economy, and creating energy-efficient buildings. The already low net costs of GHG abatement would be reduced further by the Clinton administration’s recent proposal to speed the development of advanced, efficient technologies.

As the world’s largest emitter of GHGs, the United States should begin to implement a cap-and-trade system now. Market signals need to be sent right away to start our economy moving toward a less carbon-intensive development path. To prompt action, EPA should set an intermediate cap, perhaps for the year 2005, because the Kyoto Protocol requires countries to show some form of “significant progress” by that year.

Trading between developed countries

International emissions trading could contribute substantially to curbing many nations’ cost of compliance with the Kyoto Protocol. An assessment by the Clinton administration concluded that compliance costs could fall from $80 per ton of carbon to $10 to $20 per ton if full trading among all countries were allowed. A more realistic analysis by the World Resources Institute, which examined 16 leading economic models, concluded that overall costs would be much lower but that international trading could still reduce the cost by around 1 percent of gross national product over a 20-year period.
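Taken at face value, the administration's per-ton figures imply a large savings fraction. The calculation below simply restates those numbers; it is not an independent estimate.

    # Per-ton cost comparison from the administration's assessment cited above.
    cost_no_trading = 80           # dollars per ton of carbon without trading
    cost_full_trading = (10, 20)   # dollars per ton with full trading among all countries

    savings_low = 1 - cost_full_trading[1] / cost_no_trading    # 0.75
    savings_high = 1 - cost_full_trading[0] / cost_no_trading   # 0.875
    print(f"Per-ton savings from full trading: {savings_low:.0%} to {savings_high:.0%}")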

Rules for trading between nations must begin to be drawn at the Conference of the Parties this November. However, there are key contentious issues, such as how to ensure the high credibility of trades through good compliance and monitoring systems and how to create a privately run system in which transactions can be made in minutes, not the months or even years required for government approval mechanisms. Whether transaction costs are high or low will probably determine the success of international trading.

Article 17 of the Kyoto Protocol authorizes emissions trading between countries listed in the protocol’s Annex B, which currently includes all industrialized countries. It is, however, short on details (it contains only three sentences). It will be up to the Conference of the Parties to define the rules, notably those for emissions reporting, verification, and the enforcement of penalties for violations. It is critical that the Conference design rules that allow private trading, with its low transaction costs. This may be difficult because of the lack of definition in the protocol and differing positions within the international community.

Key issues to be resolved include the following:

Trading by private entities. Article 17 makes no reference to it, but trading by private entities is fundamental. Requiring government approval for each trade creates such uncertainty, high transaction costs, and delays that the benefits of trading are substantially lost.

Monitoring and enforcement. High-quality monitoring and compliance systems are essential. At a minimum, this means accurate monitoring, credible government data collection and enforcement, and stiff penalties for noncompliance. In the United States, an early emissions trading system adopted to phase out leaded gasoline in the late 1980s experienced significant violations and enforcement actions until EPA tightened the rules. In the U.S. Acid Rain Program, high-quality monitoring, a public Allowance Tracking System, and steep penalties have led to 100 percent compliance, a remarkable achievement.

Compatibility of trading systems. Developed countries may adopt a wide variety of domestic strategies for achieving their GHG targets. Emissions trading would be facilitated if each were to adopt the cap-and-trade approach, but perhaps only the United States will do so. If other countries pursue other avenues, they could only participate in international trading through an open-market trading system, which involves substantial transaction costs. To ensure the least regulation and lowest cost for all, other countries should adopt the cap-and-trade model.

The “hot air” issue. The economic collapse of the former Soviet republics means that many central and eastern European countries are expected to be approximately 150 million tons below their GHG limits each year during the 2008-2012 commitment period. The protocol allows them to trade these “hot air” tons, even though they would never have been emitted. Trading for these tons could reduce other developed countries’ compliance obligations by an average of 3 percent, essentially raising the GHG cap. This issue muddies the waters because it mixes concerns about the overall cap with the issue of trading. Although trading should be allowed to function freely, it is unfortunate that the protocol allows the inclusion of these non-emissions.

The United States should also review two other trading-related provisions. Article 4 allows several developed countries to jointly fulfill their aggregate commitment to reduce GHG emissions. Although this umbrella approach is a potentially attractive vehicle for trading, its conditions are oriented toward the specific situation of the European Union. One major drawback is that the provision requires each country’s commitment to be established up front, which would restrain the operation of a more flexible market.

Article 6 authorizes a system of joint implementation among developed countries. Joint implementation differs from emissions trading because it requires that any emissions reduction done for trading be “additional to any that would otherwise occur.” This is a difficult case for any country to prove and requires even more oversight of each trade than the open-market approach. Such high transaction costs are likely to make this provision of little use, unless there is a failure to agree on good rules for regular emissions trading under Article 17.

Trading with developing countries

Trading between developed and developing countries has been hotly debated throughout the treaty process. For a developed country, the appeal is that investments made in developing countries, which are generally very energy-inefficient, can result in emissions reductions at very low cost, making allowances available. For a developing country, trading could be attractive because its sale of allowances could generate capital for projects that help it shift to a more prosperous but less carbon-intensive economy.

However, most developing countries, led by China and India, are opposed to trading. First, they simply distrust the motives of developed nations. Second, they rightly point out that the developed world has created the global warming problem and should therefore clean it up. Although legitimate, this second view ignores the many benefits that trading can bring to developing countries.

Many nongovernmental organizations (NGOs) are also wary of trading, claiming that the availability of allowances from developing countries will allow developed countries to avoid having to reduce their own emissions. This is unlikely, however. The United States, for example, will have to reduce its emissions by 37 percent by 2010 to reach its target. Developing countries that are willing to trade will simply not be able to accumulate enough tons to offset this large reduction. Indeed, trading with developing countries is likely to account for at most 10 to 20 percent of the reductions needed by a developed country.

Another major problem in trying to trade with developing countries lies in the weak emissions monitoring and compliance systems currently in place in many of them. Strengthening the basic institutional and judicial framework for environmental law may be necessary in many countries and could take considerable investment and many years. The protocol authorizes two possible ways for a developing country to participate in trading: emissions reduction projects under a provision called the Clean Development Mechanism (CDM) or regular emissions trading under Article 17. The choice depends on whether a developing country makes a specific emissions reduction commitment.

Without such a commitment, a developing country can trade only under the CDM, which is vaguely defined. Depending on decisions made at the Conference of the Parties, the CDM could be anything from a ponderous multilateral government organization, whose bureaucracy would dilute any advantage of trading, to a certifying entity that creates a private system for approving trades. This second model would be consistent with the kind of private trading system needed. Useful models for it may be found in the certifying mechanisms of the International Organization for Standardization and the Forest Stewardship Council.

Attention must be paid to reducing the high transaction costs of CDM trade, however. For a country’s project to qualify under the CDM, the emissions reduction must be “additional to what would have otherwise occurred.” Would a project to switch a utility from coal to natural gas combustion have been pursued anyway? Would a forest protected under a project have survived anyway? This is difficult to ascertain, as demonstrated by an existing pilot program for “activities implemented jointly,” approved by the first Conference of the Parties in 1995. The program addresses the “additionality” issue by requiring extensive review of each trade by the approving governments. This process relies on subjective prediction and can take on average one to two years, leading to very high transaction costs. Future improvements could include privatizing the verification system, standardizing predictive models, and perhaps discounting trades to adjust for uncertainties.

In addition to these difficulties, significant investment under the CDM is unlikely until rules for governing it are approved. That must await the first implementation meeting of the parties, which cannot take place until the Kyoto Protocol has been ratified; 2002 is the earliest plausible date.

Alternatively, Article 17 allows a developing country to participate fully in trading, with no requirement to show that reductions are additional, if it subscribes to an emissions reduction obligation that is adopted by the Conference of the Parties under Annex B. Because such a commitment for a developing country is likely to be generous, countries making a serious commitment to reductions, such as Costa Rica with its carbon-free energy goal, might well profit from trading.

One approach would be to set the commitment based on the growth baseline concept put forward by the Center for Clean Air Policy. This requires a commitment to reduce the carbon intensity of a country’s economy, which could allow for reasonable growth of emissions while setting firm benchmarks. In this approach, developing countries would not only benefit economically from emissions trading but would take on the kinds of solid commitments needed to achieve the goals of the convention and facilitate ratification of the protocol by the developed countries.
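In rough terms, a growth baseline ties a country's emissions allotment to its economic output rather than fixing it in absolute tons. The sketch below illustrates the idea with invented index numbers; it is not the Center for Clean Air Policy's actual formula.

    # Sketch of a growth-baseline commitment (illustrative index numbers only).
    base_emissions = 100.0   # index of a developing country's current emissions
    base_gdp = 100.0         # index of its current GDP
    intensity_cut = 0.20     # pledged 20 percent cut in carbon intensity (assumption)

    target_intensity = (base_emissions / base_gdp) * (1 - intensity_cut)

    def allowed_emissions(gdp):
        """Emissions allotment grows with the economy, but at a lower carbon intensity."""
        return target_intensity * gdp

    # If GDP grows 50 percent, emissions may grow 20 percent and still meet the benchmark.
    print(allowed_emissions(150.0))   # 120.0

A country that beats its intensity benchmark would have surplus allowances to sell, giving it a direct financial stake in low-carbon growth.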

An effective cap-and-trade system implemented within the United States would allow this country to comply with the GHG reductions it has committed to in the Kyoto Protocol at low or no cost. Because the system is a market instrument, it can rapidly bring about the adaptation, innovation, and investment needed to reduce emissions.

International trading can contribute substantially to achieving cost reductions, particularly if a cap-and-trade model with private trading mechanisms can be built into the protocol. Although such a system is unlikely to be fully mapped out at the Buenos Aires meeting in November, the first critical steps must be taken there.

Kyoto and Beyond

The international agreement concluded in Kyoto, Japan, during the first two weeks of December 1997 to limit greenhouse gas emissions to forestall climate warming is variously portrayed as a success or a failure. It was both. The road to Kyoto was pitted with political, economic, and scientific potholes. It is now on to Buenos Aires in November 1998 for another Conference of the Parties to the Framework Convention on Climate Change. To what end?

The United States had proposed that by the period 2008-2012 the world’s nations reduce emissions of greenhouse gases, principally carbon dioxide, to 1990 levels. Binding commitments from the participating nations were sought. As if to confirm the need for binding agreements to reduce greenhouse gas emissions and the futility of voluntary efforts, the U.S. Department of Energy announced two months before the Kyoto meeting that U.S. emissions had grown by 8 percent over the six-year period 1990-1996, despite the country’s voluntary commitment to return emissions to the 1990 level by the year 2000.

The position of the United States was carefully crafted over the course of 1997. The administration walked a tightrope of compromise among the diverse domestic and international constituencies affected by and having an interest in the outcome. It was a position that lent itself to criticism by advocacy groups on all sides. An international agreement of sorts was indeed reached in Kyoto, but on terms hardly favorable to the United States. It required the last-minute, hurried intervention of Vice President Gore on a 19-hour mission to Kyoto. The vice president injected sufficient flexibility into the negotiations to bring the nations at the conference to consensus on some issues, but a price was paid.

The agreement at Kyoto was a success in that it preserved the international momentum to continue to address the projected climate warming. The follow-up meeting in Buenos Aires in November 1998 will address many unresolved issues. However, the substance of the agreement failed to meet U.S. objectives. The United States was forced to compromise on some of its most critical concerns. Although the United States achieved a vague agreement on the use of market mechanisms for achieving greenhouse gas emission reductions, it had to agree to reduce greenhouse gas emissions to 7 percent below the 1990 level, a deep reduction from its original proposal. It obtained agreement on binding commitments only from industrialized countries. The developing nations insisted on and were granted a free ride. Further, the agreed reductions varied from country to country, a concession necessary to bring the various nations into agreement. For example, Australia was able to negotiate an increase in its emissions of 8 percent above the 1990 level.

The U.S. debate

The U.S. position was the result of a contentious debate. President Clinton had set the stage on several occasions. In a speech to a special session of the United Nations (UN), he stated that “the science is clear and compelling.” The administration found the economics acceptable. After all, at a January 1997 meeting of the American Economic Association, economists led by Nobel laureates Robert Solow and Kenneth Arrow had declared, “as economists, we believe that global climate change carries with it significant environmental, economic, social, and geopolitical risks and that preventive steps are justified. Economic studies have found that there are many potential policies to reduce greenhouse gas emissions in which total benefits outweigh total costs.” A U.S. government interagency committee later also concluded that the economic costs would be acceptable. After the Kyoto meeting, an official White House assessment, announced in March 1998 by Janet Yellen, chair of the Council of Economic Advisers, declared that the economic costs would be modest.

The country was exposed to an intense public debate. It was bombarded by a barrage of television commercials, both pro and con; by an outpouring of editorials; by actions of the Congress; and by press reports of conflicting statements from scientists. The Senate signaled that it would not ratify any Kyoto agreement that did not include all the nations of the world: in a 95-0 vote in July, it approved a resolution urging the Clinton administration not to sign the Climate Change Pact if developing countries were exempted. The administration’s response was that the Kyoto Treaty would not be submitted for ratification until developing nations could be included in some form of agreement.

Earlier in June, 132 members of the Business Council issued a strong statement that urged the Clinton administration “not to rush to policy commitments until the environmental benefits and economic consequences of the treaty proposals have been thoroughly analyzed.” A month later, Chrysler chairman Robert Eaton expressed the view of the automotive industry in an op-ed in the Washington Post: “In response to uncertain science and pressure from environmental activists and from countries eager for our jobs and living standards, the Clinton administration is poised to agree to a UN Global Warming Treaty that will compel us to curtail fossil fuel energy use by 20 percent, one certain consequence of which would be a decline in the country’s economic growth by a similar amount.” Ford CEO Alex Trotman joined in, declaring that human effects on climate were very uncertain and that a mass exodus of U.S. factories would likely result from a national commitment to deep emission reductions of the magnitude suggested by the European Community.

Labor organizations such as the United Brotherhood of Carpenters and the United Mine Workers joined business organizations in sponsoring critical newspaper advertisements. These powerful forces, concerned about the consequences of significant greenhouse gas reductions for the U.S. economy and jobs, joined in the strong cautionary warning.

Adaptation by humanity will continue to be the central means of coping with climate change.

Industry, however, was not monolithic in attitude. The president of the Reinsurance Association of America, in support of the administration, summed up the threat this way: “the insurance business is first in line to be affected by climate change; it would bankrupt the industry.” Breaking ranks with most of the fossil energy industries, John Browne, CEO of British Petroleum, the world’s third largest oil and gas company, said in a speech at Stanford University that there was enough evidence that pollution was contributing to global warming to begin taking precautionary action. He was later joined by leaders of other petroleum companies, including Royal Dutch Shell and Sun Oil.

Many public interest groups with deep concerns about environmental issues, as well as the environmental agencies within the administration, weighed in. The vice president, whose book Earth in the Balance had become a national primer on the catastrophes facing the planet in the absence of action on climate warming, was convinced that action by the United States was essential. His views were buttressed by the report of the Intergovernmental Panel on Climate Change (IPCC). This international group of scientists, operating under the aegis of the World Meteorological Organization and the UN Environment Programme, had been charged with periodic assessments of the status of our knowledge about climate change and its possible environmental and economic effects. Although the group recognized the uncertainties in its projections and assessments, it nevertheless concluded that the “balance of evidence” suggested that a human effect on climate had been detected. Its views and conclusions became the basis for governmental decisions throughout the world. The IPCC provided a powerful confirmation of the fears of the administration.

A cadre of scientists led by Fred Seitz, past president of the National Academy of Sciences, Richard Lindzen at MIT, and Fred Singer of the Science and Environmental Policy Project continued to question the warming projections of the climate models and claimed that there was little evidence in the data for such a warming. In fact, Singer repeatedly emphasized that the surface temperature records that indicated a global warming were contradicted by the satellite temperature measurements of the past two decades. Nevertheless, most of the world’s experts believe and worry that substantial anthropogenic global climate change will occur, although there is much uncertainty as to the timing, intensity, and regional effects.

A strong precautionary stance was urged by environmental groups and others. Jessica Matthews, then a senior fellow at the Council on Foreign Relations, expressed it best when, in an op-ed in the Washington Post, she said, “It is time and past to move beyond the jokes, the sneers, the name calling, the know-nothingism, and the false controversies and on to the real choices. The solid body of scientific evidence obliges us to ask what we’re going to do about global warming and how much we are willing to pay.” The backlash to the industry views, as expected, came also from environmental advocacy groups such as the Natural Resources Defense Council and the Council for an Energy Efficient Economy. In response to the views of automotive executives, officials of these organizations countered that “rather than relying on bad science and economics to deny responsibility, major companies like Chrysler should acknowledge that global warming poses a serious risk. Their products are contributing to the risk, and with the risk comes an opportunity: industry should seize this opportunity and support a strong agreement to protect the planet in Kyoto.”

The battle reflected the struggle within the administration. The National Economic Council and the Treasury Department urged caution because of the economic costs to the country; deep emission reductions would necessitate reducing energy use by 25 to 35 percent below what was projected for the year 2010. On the other side were the agencies of the government charged with environmental quality. Katie McGinty of the Council on Environmental Quality voiced the vice president’s views. Undersecretary of State Stuart Eizenstat, who ultimately replaced Undersecretary Tim Wirth as the chief U.S. negotiator, played a strong role. Compromise was inevitable, and the U.S. position reflected that.

Another consideration was the reaction of other governments to the administration’s proposals at a preparatory meeting in Berlin in October. As reported in the Washington Post, Germany’s Environment Minister, Angela Merkel, called the U.S. proposal “disappointing and insufficient.” Japanese Prime Minister Ryutaro Hashimoto stated that “there might have been room for further efforts.” Peter Jorgenson, spokesman for the 15-nation European Union’s executive commission, is reported to have summed it up by saying, “it is simply not good enough. There must be something better coming from the White House if the United States wants to face up to its global responsibilities.” As might be expected, China, leading the developing world, called on the rich countries to cut their emissions by 7.5 percent below the 1990 level by 2005 and by 35 percent by 2020, an impossible goal.

The controversy was played out before the general public through advocacy advertisements in the mass media, accompanied by public relations campaigns. The television spots were particularly graphic and focused on the unfairness and unworkability of any treaty that did not include all nations and on the economic impacts, illustrated by estimated job losses and higher gasoline prices. The spots in favor of a treaty appealed to the need for wise stewardship of Earth to forestall unacceptable consequences and the overwhelming scientific support for the reality of climate change. Newspaper ads conveyed the same general themes.

The administration, no stranger to the importance of public relations in achieving support for its ideas, mounted an unprecedented campaign to convince key constituencies that the threat of global warming is real. It marshaled a group of scientists led by Nobel laureates Mario Molina, Henry Kendall, and F. Sherwood Rowland, in what the White House called an East Room Roundtable, to support its position. The president indicated that failure to act could lead to widespread ecological disasters, including killer heat waves, severe floods and droughts, an increase in infectious diseases, and rising sea levels that could swamp thousands of miles of coastal Florida and Louisiana. White House forums were organized in many regions of the country. Some brought together business, industrial, and labor leadership; others assembled experts on science, technology, economics, and the environment.

The White House even organized a TV weather forecasters’ conference. The president, vice president, and half a dozen leading climate scientists undertook to brief the forecasters on the climate issues and the overwhelming scientific opinion supporting action. The administration offered as an enticement the venue of the White House grounds as a backdrop for the TV weather forecasts. The president indicated that he was not seeking to influence them but was effective in doing just that. He said, “I want to try to get America to accept the fact that the majority of scientific opinion, the overwhelming majority of scientific opinion, is accurate. I want us to make a commitment, therefore, to go to Kyoto with binding agreements.”

The public was treated to an outpouring of news reports that could only have added to the confusion. Sensational headlines in the press, although in many cases based on legitimate scientific studies, were part of the cause. The great blizzard along the east coast of the United States in January 1996 was attributed to global warming. One report cited evidence that the spring season was one week shorter than in the previous decade across much of the northern hemisphere because of climate warming. Late in the 1996 hurricane season, Hurricane Fran hit North Carolina. The headline explained that this was just a harbinger of what could be expected from global warming. The hurricane season of 1997 seemed to contradict these views; it was noteworthy for the paucity of storms. The race to be the first to claim that the average temperature of a year was the warmest on record led to conflicting estimates for 1995. The silliness was emphasized by reports that butterflies were fleeing their normal haunts because of global warming.

The Kyoto effect

A fair question is whether the agreements reached in Kyoto will lead to actions necessary to arrest climate warming. For this we must turn to science, and the answer is clearly no. At most, they are a small step in the right direction. Most scientists agree that the emissions rollback contained in the Kyoto agreement will not do the trick. It will succeed only in delaying the projected climate warming by tens of years. Sweden’s Bert Bolin, the highly regarded chairman of the IPCC, in a post-Kyoto article in Science magazine declared, “If no further steps are taken during the next ten years, carbon dioxide will increase in the atmosphere in the first decade of the next century essentially how it has done during the past few decades.” In 1990, six billion tons of carbon were being emitted to the atmosphere, of which roughly 50 percent remained resident, thus continuing the increase in carbon dioxide concentrations. Rollbacks in carbon dioxide emissions of 60 to 80 percent would be necessary to stabilize the carbon dioxide concentrations, and presumably the climate, at present levels.
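The underlying arithmetic is sobering. The sketch below uses the figures above; the conversion of roughly 2.1 billion tons of carbon per part per million of atmospheric carbon dioxide is a standard approximation, not a figure from the text.

    # Rough accumulation arithmetic behind the figures above.
    emissions_gtc = 6.0        # billion tons of carbon emitted per year, circa 1990
    airborne_fraction = 0.5    # roughly half remains resident in the atmosphere
    gtc_per_ppm = 2.1          # standard approximation: ~2.1 GtC per ppm of atmospheric CO2

    annual_buildup_ppm = emissions_gtc * airborne_fraction / gtc_per_ppm
    print(f"Concentration rise: about {annual_buildup_ppm:.1f} ppm per year")   # ~1.4 ppm

    # Stabilizing concentrations means cutting emissions far enough that the buildup stops,
    # i.e., the 60 to 80 percent rollbacks cited above, far beyond the Kyoto targets.
    stabilized_emissions = emissions_gtc * (1 - 0.7)   # midpoint of the 60-80 percent range
    print(f"Emissions consistent with stabilization: about {stabilized_emissions:.1f} billion tons per year")

Measured against that yardstick, the Kyoto reductions merely slow the annual buildup; they do not stop it.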

Another fair question concerns the acceptability of the economic costs of the greenhouse gas emission reductions agreed to at Kyoto. Here opinions differ widely. To make economic projections, it is necessary not only to specify the effects of a climate warming on all human activities but also to assume the size and distribution of the global population, the rates of economic growth of the various regions of the world, the rates and types of technological change, and the effectiveness of economic policy measures such as taxes and emissions trading regimes. Economists deal with the unknowable. For instance, changes as dramatic as the collapse of the Berlin Wall and the present economic difficulties in East Asia were completely unforeseen. Economists can devise scenarios of future economic and technological global systems only on the basis of present circumstances and long-term trends. Nevertheless, economic projections are valuable because they provide a context for policy action.

There is a need for a global energy technology strategy and a program to implement it.

More than a dozen government and private groups have projected economic costs in various ways. Estimates vary depending on assumptions that are incorporated in the economic models. According to the World Resources Institute, there is “little apparent consensus.” A government Interagency Analytic Team considered a range of economic models, as has the World Resources Institute. Both concluded, on the basis of their own models, that the costs would be substantial but probably acceptable, and under the most favorable assumptions could have a positive effect on the economy. The models included assumptions about cross-border trading of emission rights to reduce economic losses; accelerated technological developments to reduce emissions; and the use of taxes on carbon or other sources of revenue such as the auctioning of emission permits for federal budget reduction, thus reducing the need for federal borrowing and indirectly releasing private investment capital.

According to some modeling studies, the price of carbon would increase by hundreds of dollars per ton, and the resulting economic losses would range between 0.2 percent and 2.4 percent of gross domestic product. Charles River Associates, a private consulting firm, concluded in an economic study for the automotive industry that the United States would suffer a 1 percent loss in total economic output. The conservative Heritage Foundation found that U.S. drivers could look forward to a 70-cent-per-gallon rise in gasoline prices and a loss of 100,000 jobs in the steel industry alone. The White House assessment is a 4- to 6-cent increase in the cost of a gallon of gasoline and a 3 to 5 percent increase in the cost of electricity. Whatever the economy-wide losses might be, energy-intensive industries would be severely hit. The vice president, in a press conference, said that he was convinced that the U.S. public would accept the needed measures. Given the wide range of estimated economic consequences, the United States and other countries would, in fact, be carrying out a massive experiment with their economies, with unknown results.

Useful actions

Because the agreed-on reductions in carbon dioxide emissions cannot “stabilize” climate, even under the most favorable and economically viable scenarios, the nations need to rethink the course they are on. They need to face up to some well-understood facts about climate and society.

  1. Global climate responds to the total concentration of greenhouse gases in the atmosphere. Warming is independent of the temporal path to these concentrations.
  2. Adaptation by humanity has historically been the central means of coping with climate change. That will continue to be the case in the future if warming occurs.
  3. When costs and benefits are disassociated, costs will be borne reluctantly and benefits will attract free riders. The global benefits of arresting climate warming are not seen as outweighing local economic and social penalties.
  4. Energy consumption and greenhouse gas emissions are in large part a function of global population. Population stabilization can have an enormous impact on emissions reduction.
  5. Greenhouse gas-induced climate change is principally an energy problem. Reducing the carbon dioxide released in the combustion of fossil fuels requires increased efficiency in energy use and “decarbonization” of energy sources.
  6. Only through the development of new and improved energy technologies can reductions in greenhouse gas emissions of the necessary magnitude be achieved without significant economic pain.

These principles suggest a range of policies. The first principle suggests that it is preferable as a policy matter to invest in a more efficient and less carbon-dependent energy system now rather than incur near-term economic costs of greenhouse gas reduction with existing technologies.

The second principle suggests that even if climate change is inevitable, adaptation will in large part cope with the dislocations it causes. Rich nations with considerable resources will adapt more easily than poor nations. Modes of adaptation that require substantial investments will be supported by nations that can afford them, and means will need to be made available to less capable countries. Society has always adapted to climate change as part of a normal response. Dams are built to prevent floods and provide water in arid regions. Seawalls are erected to prevent coastal flooding. Breeding and genetic engineering of crops and animals enables biospheric responses to harsh environments. Actions are taken when climatic reality intervenes.

Population stabilization is essential to climate stabilization. Fortunately, there is good progress to report. Fertility in the developing world has been decreasing, dropping from six to four births per woman over the past several decades; the education of women and wider access to reproductive technologies are the cause. Studies of population demographics by the UN and the International Institute for Applied Systems Analysis in Laxenburg, Austria, now project the possibility that the world’s population will stabilize at between 8 billion and 10 billion people.

Population stabilization is essential to climate stabilization.

Finally, there is a need for a global energy technology strategy and a program to implement it. Developments in energy technology show promise, and there has been a gradual awakening to this fact. In September 1997, President Clinton’s Committee of Advisors on Science and Technology, through a subcommittee on energy R&D for the 21st century chaired by John Holdren, laid out wise and far-reaching proposals for research and development and economic measures to facilitate and stimulate investments in energy technologies. In his 1998 State of the Union Message, President Clinton effectively endorsed these suggestions. Five national laboratories of the Department of Energy (DOE) have advanced a road map for U.S. carbon reduction. DOE’s Pacific Northwest National Laboratory is developing a global strategy in conjunction with several private corporations.

The multiplicity of technologies that are available, under development, or in the conceptual stage is impressive. For mobility applications, hybrid drivetrains that combine gasoline or diesel engines with electric motors are coming into commercial use; Toyota is already marketing a hybrid car in Japan. Fuel cell technology is advancing rapidly. Propulsion systems are now proposed that use hydrogen derived from hydrocarbon feedstocks, from which the carbon is stripped and sequestered as carbon dioxide.

Efficient and low-carbon energy-generation technologies for power plants are in operation throughout the world. Combined cycle gas turbines that are far more efficient than conventional coal- or oil-fired plants are now widely used; they offer the opportunity for distributed generation, thus relaxing dependence on modern electric grid systems. The Electric Power Research Institute is presently developing an electricity R&D road map that looks decades into the future, seeking low-cost, highly efficient, and environmentally sustainable energy systems.

Nuclear power now generates 17 percent of the electric power in the United States and is moving to claim ever larger shares of the world generation market. Safety fears can largely be overcome by new designs, and even the disposal of radioactive wastes appears doable. Although commercial fusion power is a half century away by most estimates, it is advancing, and the International Thermonuclear Experimental Reactor program reflects the international interest in it. If and when commercial fusion is successful, totally new possibilities for greenhouse gas reduction will present themselves.

Renewable energy sources (wind, photovoltaics, hydropower, and biomass) are now becoming cost competitive. Finally, the efficiency of end products has been and will continue to be increased. Refrigerators, lighting fixtures, and electrical appliances are on the proper trajectory.

If climate warming indeed poses a serious threat to society, the United States and other nations should adopt policies to restrain greenhouse gas emissions through technology and forego the economic costs of the present trajectory. Such measures make economic sense even in the absence of a global warming threat. The United States should use its vast political, economic, and technological power to stimulate and lead a truly international assault on the development of energy technologies. This course makes good planetary sense.